From YouTube: MASQUE WG Interim Meeting, 2021-04-15
A: All right, let's get going. Welcome to our April interim for MASQUE. We've got some fun things to do, and as we get into those, this is the Note Well. You should all be familiar with it; please read it again in case you haven't read it in a while. These are the terms under which we participate here, so make sure we're reading it, paying attention to it, and especially being respectful and kind to everyone else. Some useful links: the notes are here. Thank you, Mike, for being our notetaker.
A: For today: we've talked about scribe selection and the Note Well, we can batch the agenda, and we're going to start off by putting the first half entirely on CONNECT-UDP and H3 datagrams. We've got a couple of choices that we need to make on that, and it would be good to get some of that wrapped up.
A: The other thing we're going to do is try to time-box this fairly strictly, so we've put times for each of these items. We're going to put a timer up on the screen; let's make sure we stick within those so that we have time to get through everything we want to do. There's lots of good discussion, and sometimes we may need to take some of it back to the list. With that, let's get started with H3 datagrams.
C: Thanks, Eric. Hi everyone, I'm David Schinazi. On the agenda we're talking about both CONNECT-UDP and its dependency, H3 datagrams. There was a lot of conversation recently on H3 datagrams, and we realized that there are some pretty big open design issues that would have cascading effects on CONNECT-UDP. So today we're not really going to discuss CONNECT-UDP at all, because a lot of the open issues on CONNECT-UDP depend on these design decisions in HTTP datagrams. We're going to put CONNECT-UDP on hold until we reach consensus on those points in H3 datagrams: not on all of the minute details, but at least on the fundamental design goals and properties. That way we can go back to CONNECT-UDP afterwards, because otherwise we would just end up changing it.
C: It doesn't have defined semantics in HTTP/3, which is an application-layer protocol. And because HTTP/3 is multiplexed, you can have multiple requests, say CONNECT-UDP and CONNECT-IP, over the same QUIC connection. So when you receive a datagram that applies to one of those, you need to be able to tell which request it maps to: oh, is this for this CONNECT-UDP?
C: You put a varint at the start of the payload of the QUIC DATAGRAM frame, and after that we have something that we call the H3 datagram payload, which will then be used by, say, CONNECT-UDP or other things. But we start with an identifier that allows this demultiplexing. That has been the design pretty much from the start, and there hasn't been any question about how to do that. But are we done? No. Unfortunately, the fact that we have a number there isn't enough.
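The framing being described here, an identifier prefixed to the DATAGRAM frame payload, can be sketched roughly as follows. This is only an illustrative sketch based on QUIC's variable-length integer encoding (RFC 9000); the function names are invented, not from any draft.

```python
def encode_varint(value: int) -> bytes:
    """Encode an integer as a QUIC varint (1, 2, 4, or 8 bytes)."""
    if value < 1 << 6:
        return value.to_bytes(1, "big")
    if value < 1 << 14:
        return (value | (1 << 14)).to_bytes(2, "big")
    if value < 1 << 30:
        return (value | (2 << 30)).to_bytes(4, "big")
    if value < 1 << 62:
        return (value | (3 << 62)).to_bytes(8, "big")
    raise ValueError("varint values must be below 2**62")

def decode_varint(data: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed) for a varint at the start of data."""
    length = 1 << (data[0] >> 6)           # top two bits give the length
    value = int.from_bytes(data[:length], "big")
    value &= (1 << (8 * length - 2)) - 1   # mask off the two length bits
    return value, length

def wrap_h3_datagram(identifier: int, h3_payload: bytes) -> bytes:
    """DATAGRAM frame payload = identifier varint, then H3 datagram payload."""
    return encode_varint(identifier) + h3_payload

def unwrap_h3_datagram(frame_payload: bytes) -> tuple[int, bytes]:
    """Split off the identifier so the datagram can be routed to its request."""
    identifier, consumed = decode_varint(frame_payload)
    return identifier, frame_payload[consumed:]
```

The point of the varint here is that common small identifiers cost only one byte on the wire.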
C: So, some requirements for the specific design that we're going to build. One we've been calling "false start", as a nod to TLS. Conceptually, let's say you're doing CONNECT-UDP: the client sends a CONNECT-UDP request, and it's going to eventually receive a 200 saying everything's fine, or some HTTP error status code. But this is 2021; we've finally realized that latency is important. We shouldn't burn a round trip before sending our first UDP packet.
C: Another important requirement is intermediaries, and by this I specifically mean HTTP intermediaries. Just a quick note on terminology: the client is the client; I guess that's not a very helpful definition, but it's pretty clear what that one is. The proxy in CONNECT-UDP is the one that responds to the CONNECT-UDP request and opens up the UDP socket, so the proxy is the server in HTTP terms. The target is whoever you're trying to connect to.
C: So, say I go to proxy.example.org and I ask it to CONNECT-UDP me to google.com: google.com is the target, proxy.example.org is the proxy, and between the client and the proxy there can be HTTP intermediaries. That's what I mean by intermediaries, and we need to support those. They make the design a little more complicated, because there are things that are per-hop properties and things that are end-to-end.
C: When there's an intermediary, per-hop means between the client and the intermediary, whereas end-to-end is from the client all the way to the proxy. Again, these aren't too exciting, but we definitely need to support them, because there are many HTTP deployments that use them. The third requirement here is extensibility: we're going to build CONNECT-UDP, and we're going to build CONNECT-IP.
C: We already have a lot of interest from members about extensions to these protocols, so we need to make them extensible. In an ideal world, it would be really nice if you could extend CONNECT-UDP by only modifying the client and the proxy, building extensions without having to modify the intermediary each time. That would really lower the barrier to entry for extensions, and I think it's a very important feature. All right, next slide.
C: Please. Now I'm going to go through a few examples to help illustrate what we mean here. (Could you move the timer a little, make it smaller and move it to the right?) So, this is the simplest example: CONNECT-UDP without any extensions. You put your UDP payload in the H3 datagram payload. Done. Very simple.
C: You just need a way to associate this datagram with your request, using the varint, because that request has all the context you need, namely the target IP and port: which socket you're actually going to shove this UDP payload onto. Next slide, please.
C: Now let's do the same thing, but assume someone wrote a CONNECT-UDP timestamp extension that allows you to convey at what timestamp a packet was received. In that case you would have a format that has a 64-bit timestamp, for the sake of argument, followed by the UDP payload.
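A minimal sketch of what such a hypothetical timestamp extension's wire format could look like. The 64-bit width is taken from the "for the sake of argument" framing above; the field names and microsecond units are invented for illustration, as no such extension is specified.

```python
import struct

def wrap_with_timestamp(timestamp_us: int, udp_payload: bytes) -> bytes:
    """H3 datagram payload = 64-bit receive timestamp, then the UDP payload."""
    return struct.pack("!Q", timestamp_us) + udp_payload

def unwrap_with_timestamp(h3_payload: bytes) -> tuple[int, bytes]:
    """Split the fixed-width timestamp off the front of the payload."""
    timestamp_us, = struct.unpack("!Q", h3_payload[:8])
    return timestamp_us, h3_payload[8:]
```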
C: This example is a bit more complex, but I think we've already had discussion on the list that it would be useful, so I'm going to dive into it a little bit, because it informs the kinds of things we want to be able to build here. Let's say you have CONNECT-IP, which allows you to send IP packets. We'll talk way more about this in the second half of our meeting today.
C: But let's just assume that we go with this design. You set up your tunnel, you're sending IP packets, everyone's happy, and then you realize: I'm sending a lot of packets that all have the same IP header, because they're for this one IP address, and the same UDP header, because they're for this one five-tuple. It would be great to save some bytes on the wire by compressing that. So let's say you build an extension to CONNECT-IP where you say, all right:
C: I want to negotiate, in the middle of this tunnel's stream: let's now start compressing IPs. So for this specific IP header, I'm now going to send you just the IP payload, or maybe even just the UDP payload, or what have you; some kind of compression scheme. And you want to be able to have that multiplexed with other IP packets, because maybe the majority of your traffic is for this compressed flow, but there could be other flows on the side.
C: So, similarly, you need multiple formats to coexist over the same request. And one quick point on design there: we had discussed on the list using separate HTTP requests to negotiate these things, but that doesn't work well with intermediaries, because a separate request can be routed to a different backend.
C: So, given this, for one request (one association from one client all the way to a proxy) we're going to need to (a) have these multiple formats coexist and (b) be able to negotiate new formats on the fly, halfway through. Next slide, please.
D: Sorry, this is Mirja. I totally understand your point about multiplexing multiple formats, and that you want to change in the middle and say, now I want to start compression or whatever. But I'm not sure why you need to negotiate in the middle of the stream. For the negotiation part, you really want to know whether both ends support this kind of compression or whatever, and if you know they support it...
D: And if you know what the mapping is, you can negotiate everything at the beginning, right? You don't have to do it somewhere in the middle.
C: So, if you could go back to slide six, please, Eric: in this example.
C: As you can see, at the beginning you do not necessarily know what the five-tuple is going to be; you're just an IP tunnel. The fact that you notice later that there's a lot of traffic on one five-tuple is not something you knew at the moment you created the tunnel. That's why you need to be able to negotiate it halfway through.
D: Okay, so I was making the assumption that for basically every IP flow that comes in, you basically send a new CONNECT request. So you would always just have one IP flow there, and then you would actually know whether it's an IP flow or not.
E: I was just going to comment on this topic too. Mirja, I was definitely coming from the same mindset. I think the key tricky thing here is intermediaries: if you're going to one proxy, yes, that proxy can definitely do the right thing between multiple requests.
E: If you have one request end-to-end, then you can do control messages within that single request to add and remove things, and it's less work to stitch things together. So essentially, think of the stream ID as being the request ID; that's the end-to-end thing, and that's all you need to get through intermediaries. And then, within that end-to-end request, you can negotiate whatever other IDs or extension properties you want, and those are effectively hidden from intermediaries.
C: Thanks, Tommy, totally agree. Ben Schwartz?
F: So I think the word "negotiate" here is possibly generating a little confusion. I think that, in at least some of the designs we're talking about here...
F: The payload formats are negotiated once at the top of the request and then instantiated with different parameters mid-stream. So it depends on what you think the word "negotiate" means; from my perspective, negotiation is the thing that happens at the beginning, in the request and response.
C: Okay, so maybe "negotiation" was not the right word; I apologize. What I meant to say was that in this CONNECT-IP example you associate these semantics, say the members of this five-tuple, with a format, in the middle of the request. So maybe "negotiating" isn't quite our word, but we're exchanging this information. I think in further slides I might use the word "negotiation" as well, so please read it that way: that's what I mean, not necessarily the other definition.
C: Sorry for using unclear terminology here. All right, next slide, please. Cool. Now that we've landed on a rough sense of what we're trying to build and some requirements, let's go into the design discussion. I've split this up into a bunch of binary questions on ways we can solve a lot of these problems. On a lot of these I personally have opinions, but I'm not tied to any of them: I care a lot about the requirements and what the system can do, but for a lot of these, let's see where we can go. As you may notice, I have posted a picture of a glass bikeshed. Please don't go and try to paint the glass bikeshed; that is a bad idea.
C: If we manage to answer the top-level architectural points, that would be incredibly helpful, and then we can, on the list or at a next interim or sometime very soon, really get into the meat of how we want to encode these bits, because that is also very important. But let's first get the architecture right and then get all the details. Perfect, next slide, please.
C: All right. This was an idea that was proposed by Ben Schwartz recently, and I like it. Intentionally, at the beginning of this presentation, I was very vague about using the word "varint" for the number at the start of the QUIC DATAGRAM frame payload. That number, in the current drafts, is called the flow ID, and the flow ID, as you can see in the left column here, is a connection-wide concept.
C: So again, it applies to one QUIC connection, not to an end-to-end request, and it serves two purposes. One is to map from this datagram to a request, and the other is to additionally map, inside that request, to what I'm calling an in-request context. For example, in this IP compression case it would map to a five-tuple. So you have one flow ID and it gives you all this information.
C: The downside of that design is that it's a bit more complex for intermediaries: since this flow ID is a per-hop concept, intermediaries need to do translation on both hops. So Ben proposed a two-layer design where, at the start of the DATAGRAM frame payload, you don't put one varint, you put two. One is the stream ID, and that gives you an association with the HTTP request.
C: The stream ID is already a per-hop construct in QUIC, so intermediaries know how to handle it; you would pretty much just reuse that, and it gives you a direct mapping from the datagram to the request. And then you add a second number, potentially encoded as a varint, that is what we also call the flow ID.
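A minimal sketch of how this two-layer prefix might be built and rewritten at an intermediary, assuming QUIC varint encoding (RFC 9000) for both numbers. The helper names and the dict-based stream ID map are invented for illustration, not taken from any draft.

```python
def encode_varint(v: int) -> bytes:
    """QUIC variable-length integer (1, 2, 4, or 8 bytes)."""
    if v < 1 << 6:
        return v.to_bytes(1, "big")
    if v < 1 << 14:
        return (v | (1 << 14)).to_bytes(2, "big")
    if v < 1 << 30:
        return (v | (2 << 30)).to_bytes(4, "big")
    return (v | (3 << 62)).to_bytes(8, "big")

def decode_varint(data: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed) for a varint at the start of data."""
    n = 1 << (data[0] >> 6)
    return int.from_bytes(data[:n], "big") & ((1 << (8 * n - 2)) - 1), n

def wrap_two_layer(stream_id: int, context_id: int, payload: bytes) -> bytes:
    """DATAGRAM payload = stream ID varint, context ID varint, then payload."""
    return encode_varint(stream_id) + encode_varint(context_id) + payload

def forward_at_intermediary(frame: bytes, stream_id_map: dict[int, int]) -> bytes:
    """An intermediary rewrites only the per-hop stream ID; the end-to-end
    context ID and payload are forwarded untouched."""
    stream_id, used = decode_varint(frame)
    return encode_varint(stream_id_map[stream_id]) + frame[used:]
```

This captures the separation being proposed: the outer number is per-hop state the intermediary already manages, and the inner number is opaque to it.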
C: But it gives you a much cleaner separation where, from an intermediary perspective, you have to rewrite the stream ID, which you already do when you're forwarding requests, but the flow ID you just leave as-is and don't care about; and when you're doing negotiation, you just pass that along back and forth. So I want to hear folks' thoughts on these two designs.
C: I'm personally leaning towards the two-layer design, because I think the cost of one byte is worth it for something that is cleaner and simpler for intermediaries. But I would love to ideally reach some consensus in the room, which we would then confirm on the list, about picking one of those two designs.
C: No, the flow ID is inside. Sorry: there is a namespace of flow IDs for each request, I guess. So "per request" was meant in the other sense of per request; that was not the right word, sorry. A given CONNECT-UDP request has a set of flow IDs.
C: So 481 means timestamps... timestamps, thank you. Sorry, I heard some really weird noises that threw me off course there. And then you can negotiate them mid-stream to take on meanings with additional things. Does that make sense, Jana?
G: "Flow" is abused all over the place; I don't want "flow". But it sounds like it's specific to a request and to a context, so, okay, that helps me understand it better. And the stream ID here is what the flow ID used to be.
E: So I also, like you, think that we kind of need to go with the two-layer design. After IETF 110 we had bounced back and forth a number of ideas, and independently this is where I ended up as well. I think largely, as you were saying, it has to do with the fact that we have intermediaries; if intermediaries did not exist and we did not need to worry about them...
E: ...there's this question of what this context is. Even last time we were having these arguments about what the flow ID means: some people viewed it as just for routing, like a stream ID, and some people were saying no, it has meaning for the request. Splitting it up, I think, just makes it a lot clearer, and then it's very clear that sometimes you will have these degenerate cases where you don't need any extra flows; you're just always using the zero flow ID, or context ID, because it's a very basic thing, and that's okay.
B: So yeah, let's do the two-layer design. All right. Thanks, Tommy. Mirja?
D: Yeah, I also think that the two-layer design is cleaner and therefore nicer. I can also agree that "flow ID" is very confusing. I would even propose to not call it "context ID" but just "context", because I think we should still discuss what the semantics of the context ID are. You could also have an extension that just defines that, say, these two bits of the context should be the ECN bits, right?
D: So basically, at the moment it's just a field, and then an extension could assign any kind of semantics to the field, and that's what we want. Yeah.
C: Thanks, Mirja. Well, the reason it's an ID is that, specifically, it would be encoded as, let's say, a varint, because QUIC likes varints, and it would map to a context that could mean more things. So it could mean ECN bits and other things, and you could conceptually just choose to encode the ECN bits in there, but it stays, conceptually, an identifier that points at something.
D: Yeah, so for me the difference is that an extension could then define semantics for how to use this context, while if you just have an identifier, then basically for every request you would use a different set of identifiers to mean the same thing, rather than defining semantics for a field. But this is something we can argue about later, I think.
C: Yeah, that makes sense, thanks. And then in the chat, because I'm going in order: Klein mentions that RFC 8724, which is Static Context Header Compression, similarly has the concept of an identifier that maps to a compression context. So yes, it is similar in idea to what I'm just going to call "context ID" from here on out, because I really like that name; but we can pick the name later. Kazuho?
C: Thanks. Okay, Matt in the chat prefers "context" over "flow". Great. Luke Curley?
H: Hello. I think my only thing is... well, first off, I really like the stream ID mapping. It especially makes it great when you receive a datagram before the CONNECT request: you can tell, oh hey, there's going to be a CONNECT request coming, and you can buffer it.
H: The only thing is, the context ID seems a little application- or protocol-specific. I can totally imagine a case where I don't want to send a zero on every packet; it feels like what that first byte is going to be could be negotiated on the CONNECT request.
C: That part we have a further slide to discuss, so hold that thought and we'll get back to it. That's good, thank you. Then Alex?
C: All right, thanks, Alex. Then Lucas has another clever name for it, but let's not try to bikeshed the name today. I see Martin Duke in the queue.
L: Yes, hi David. I have a clarifying question, and then either a comment or a question depending on how it goes. First of all, to be clear: the flow ID, or context, or whatever we're going to call it, is that globally unique across two streams?
C: The namespace would be specific to a request, so two requests, or in other words two streams, could reuse the same number for different meanings. The namespace would be tied to a request.
L: Okay, well, that's good; that will probably shorten the length of that varint, then. That's good. The other question is about ECN. I know people don't like ECN as an example, but I care deeply about whether or not we get that in, and the flow ID... I mean, if that's end-to-end, the ECN signaling needs to be per-hop, right? So how do those interact? Or am I not understanding the problem correctly?
C: So then it is a property of it: there would be a flow, slash context, ID that would be a way to encode how this UDP payload should be treated, if you see it that way. But I don't want to get too much into the ECN discussion, because there are multiple ways to solve this and we don't necessarily agree on that.
C: Cool, thanks. Jana's back.
G: Yeah, thanks, David. I think that, if I understand this correctly, the two-layer design makes a lot of sense, and I don't quite think of it as two layers. I guess it is two layers in some sense, but it's not, because they're orthogonal, right?
G: So in that sense it is two layers, but at the same time the purpose of each is completely different. Given that, the two-layer design seems to make sense to me. I do have one question, and you can defer this to later: basically, how are the semantics of the context ID defined? Are they... and you said that they define that...
C: Cool, thanks. Then Tommy says plus one on this being an ID, not a bit field, because it avoids conflicts better; I do agree with that. And folks are talking about the name, cool. All right, we've drained the queue. The sense I'm getting is that there's overwhelming support in the room for the two-layer design, so we can confirm that on the list later. Next slide, please.
C: The concept is bidirectional in the sense that once one endpoint says "this context ID means this", then you can send in both directions. Another way you could design this, which some of the proposals had, is something unidirectional, where you end up having two namespaces, one in the client-to-proxy direction and one in the proxy-to-client direction, that are separate. They might end up being the same, but they're separate.
C: And then you make a declaration unilaterally: you say, this ID for me means this, and you don't negotiate it. Whereas in the bidirectional sense you say, I'd like this ID to mean this; the other side says, okay, this ID means this; and then you can use it. That also still allows false start, in the same way that the regular CONNECT-UDP mode does. And I see questions, so I'm going to...
N: Lucas here. I'll be serious for a change, sorry. It's not a question, more a comment, so I do apologize. I think the example of IP compression that you presented in the recent proposals shows that the bidirectional design just breaks down, because, for example, the client is saying: I'm going to send you this thing with some compressed IPs, and I want you to uncompress it and send it forward.
N: That doesn't necessarily mean that the packets coming back want to be compressed in the same way, and if that's not true, then the bidirectional thing just doesn't make much sense to me. Therefore the unidirectional design is more attractive, because you can create a two-way logical relationship without this kind of pretend-bidirectional thing that may or may not be true.
C: That is true. The one downside that I noticed in conversation this morning was the problem with unidirectional: it's not negotiated, so you don't get a chance to accept or reject a declaration from your peer, and that could be a DoS vector. We don't want to add anything to the protocol that lets an endpoint create incremental memory use on the peer, because that could be bad. And so what you're saying is, I think, and you're right:
C: ...there are use cases where the bidirectional design will only be used in one direction, and okay, we've wasted a little bit. It's a 62-bit space, so it's not a huge deal; it means in some edge cases we'll use a slightly larger varint, but I don't see that as too big of a problem. And I think there are some extensions where the idea of being able to say "no, I reject this"...
C: ...sorry, where this negotiation might be useful, and that's why I'm personally leaning towards that. But again, both of those work, if we find a way to make them work.
N: Just to respond: I think you can achieve what you just said without having to couple it to being bidirectional. Tangentially, the registration of a flow, or whatever you want to call it, and the rejection of it, is different from "yeah, I'm going to set something up in two ways". This is how QUIC streams work: you can...
C: Okay, so what you're saying is that we could make it unidirectional, but with a negotiation step. That makes sense and is absolutely okay; that is the third option here. I'm going to take the other questions and comments and see what folks think about that; I totally see that as a viable third option. Ekr?
C: Yes. So I think bidirectional is clearly simpler. It matches the implied semantics here, which is that the client initiates connections and is in charge, and that the proxy takes the client's instructions. Just to head off later suggestions: yes, we might want to allow incoming connections, but those are the opposite, and the proxy should be saying "here's a new connection"...
C: ...and the client should be able to reject it. So if we ever want to do the thing Christian wants to do, where you can run a server, that should be simulated as if it were the proxy taking what is currently the client's role for that particular bit. Just to add a clarification on that specific point: in the bidirectional design, the way you solve this is by having, like we do in the rest of QUIC...
C: ...even numbers are client-initiated and odd numbers are server-initiated, so this works in either direction, right? No, I understand what you're saying, but... well, okay, so I guess I'm not really...
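The even/odd split mentioned here mirrors how QUIC stream IDs avoid collisions between peer-allocated IDs. A toy sketch of what such an allocation rule could look like for context/flow IDs; this is purely illustrative, as no adopted draft specifies this numbering.

```python
class ContextIdAllocator:
    """Allocate context IDs without cross-endpoint collisions:
    client-initiated IDs are even, server-initiated IDs are odd."""

    def __init__(self, is_client: bool):
        # Client starts at 0 (even); server starts at 1 (odd).
        self._next = 0 if is_client else 1

    def allocate(self) -> int:
        context_id = self._next
        self._next += 2  # stay within this endpoint's parity
        return context_id
```

With this rule, either endpoint can declare a new context unilaterally and the numbers can never clash, which is exactly why the same trick works for QUIC streams.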
C: What does a server-initiated flow mean? What are the semantics of a server-initiated flow? So let's say, for example, in the IP compression example, where...
C: Very quickly: the request is client-initiated. The client says, I want to set up a VPN tunnel to you; the server accepts that, and they're happily sending IP packets back and forth. Then the server can say: hey, I'm realizing that I'm sending a lot of packets with the same five-tuple to this client; let me tell the client I would like to compress these now. And that's a server-initiated negotiation of a flow ID.
C: So then I guess I don't understand what the semantic difference is between these two proposed designs, because in either case the server can initiate an arbitrary number of things and try to force them on you, and so either you...
C: The way the bidirectional one was set up was that one side sends something and the other side echoes it to signal that it likes it, but we haven't nailed down all the semantics. So conceptually, unidirectional and bidirectional end up being very much the same thing; the only important thing we need to figure out is whether there's a way to reject it or not. I think... yeah, I guess I'm also like, sure.
C: Well, I mean, if you think it's a DoS vector that allows the peer to accumulate unbounded state, then you have to be able to reject it. No, no, certainly; I mean, that's really what's going to happen. But I guess I'm getting a little worried about this conflation of flows with compression state; that seems like a really odd design choice.
C: I don't think it worked out that well in TURN, for instance. So what happens if you have partial compression, right? Here's a dumb example: here's a new connection ID, but I'm going to compress the packet number for some reason, and I've got...
C: ...I've got something expressing the packet numbers, or... I mean, I guess I just think, like...
C: I think every time we try to tag some piece of state with an integer, that's probably a fail, and certainly these are not conceptually streams; I think compression is quite different. So I guess, if the motivating example here for having this sort of initiation in both directions is compression, then I'm not persuaded by any of that.
C: I mean, there are a few other examples; maybe go and grab the slides, or watch the video afterwards. But sorry, I'm not going to repeat the whole thing. Okay, well, I guess I'm unable to come down on one design, so I'm not sure I need either of these things in that way.
F: So yes, bidirectional and unidirectional both have essentially the same DoS-vector problems; I think that's orthogonal. I do think this is a great time to talk about "context" instead of "flow", and I do think that Ekr's analysis is correct on the facts: this is essentially a compression context.
C: That is a motivating example, but take the timestamp extension, which is the opposite: there, this flow ID, slash context ID, would say "this is how you parse this packet", and there's additional information at the start. So it tells you how to parse this packet. It could be a decompressor, it could be additional information, it could be all sorts of things.
F: Sure. In principle, though, I think we can still represent it that way. Essentially, in all of these cases that add information, we could say: well, there's some maximal state that conveys all the information explicitly, and then the various context IDs are different ways of representing subsets where some of that information is not relevant.
F: So I think it is still essentially correct, in all the cases that I've seen so far, to describe this as a compression context, and I think that's valuable. I think the overheads we're talking about are high enough, and the mechanisms we're talking about are simple enough, that it's worth it. Part of the reason...
F: ...I think it's simple enough is that I don't think we need rejection, because again, these are all just compression mechanisms, so the peers can agree at the beginning which compression mechanisms they support; and then, as long as they send each other well-formed compression context descriptions, they should be able to compress without needing to reject.
C: Can I ask a clarifying question? Because I think I'm getting lost in this DoS thing. As I understand it, if I'm going to offer a new indicator that says "from now on, flow 32 means it's got this IP address", or some other compression point, I can't start transmitting on that thing until I've received some acknowledgement that that was delivered, right?
C: You can, and it's the false start we discussed earlier. Conceptually, the idea is you send with this flow ID, and if the other side has rejected it, it just drops the datagram on the floor, same as when it sees a flow ID that is unknown. Oh yes, if you're willing to take, like, four RTTs when the data-loss situation doesn't work out, sure. Okay; I don't know, I'm not very persuaded it's going to work, but okay, I understand your point. All right.
C: I think we're going to drain the queue, but it's getting cut a bit quick. So, Alan, if you could make it a bit quick...
E
Great. I definitely agree that bidirectional versus unidirectional is separate from whether or not we have acceptance or acknowledgement of the request that initiates a flow ID, and I think bidirectional versus unidirectional is a total bike shed. I agree with EKR that it's nice to have bidirectional; that seems more consistent with a lot of things that we have. I could live with unidirectional, though, but I'll put a vote in for bidirectional.
E
About the claim that all of the extensions are just compression contexts and that we would never need any ability to reject or negotiate: a DoS attack is one way to think of it, in terms of increasing state, but there are also cases where I may simply be requesting a given context to say, hey, I'm doing this to map this local IP address on my IP tunnel, or I'm trying to do this, and the proxy can just say, you have an error like this.
E
All the packets you're going to send on this are going to be dropped, because the address you're trying to send from, the one you're binding this to for compression and compressing away, is not one we assigned or let you use for this tunnel. Or, hey, you just added two different contexts for the exact same ECN marker, or something like that: essentially a way to say you sent nonsense. And this is totally different from HTTP, because in HTTP requests we have a status.
E
If we don't build any way to do that, then we are losing something that we have today, where we have a unique request per flow. If we don't have a way to negotiate it, it essentially only lets you do it for things that you have pre-negotiated, and I think we would have to pre-approve everything in the actual header request and response, and then you wouldn't have any dynamic nature.
E
You wouldn't be able to make sure that future changes to what you want to forward or tunnel are going to be associated with the right end-to-end context through intermediaries. So I think we should split them up, and I don't see a way we can get away without negotiation.
O
Okay, thanks, Tommy. Alan? Yeah, so this question is, I think, related to unidirectional versus bidirectional, but if it's not, feel free to move on. It is: what is the lifetime?
O
How long am I allowed to send datagrams on a particular stream ID / flow ID pair? In particular, am I allowed to send datagrams on that stream ID after I'm no longer allowed to send STREAM frames on that stream ID, like after I've sent the FIN bit? And how does that impact bidirectionality versus unidirectionality?
C
So those are separate now that we've agreed on the two-layer, or two-varint, design; the stream ID part is tied to the stream. We haven't spelled this out, but my view is that the lifetime would be equal to the lifetime of the stream. Once that stream has been FINed or reset, you're no longer allowed to send datagrams with that stream ID. The flow ID is inside of a given stream, and so that would again have its lifetime tied to the stream.
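The lifetime rule being described, drop datagrams whose associated stream is unknown or already closed, can be sketched roughly as below. This is a minimal illustration with invented names (`DatagramDemux`, `on_datagram`); it is not code from any draft or implementation.

```python
# Sketch: an HTTP/3 datagram is only deliverable while the request
# stream it is associated with is still open. All names are invented
# for illustration.

class Stream:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.closed = False  # set once the stream is FINed or reset

class DatagramDemux:
    def __init__(self):
        self.streams = {}  # stream_id -> Stream

    def open_stream(self, stream_id):
        self.streams[stream_id] = Stream(stream_id)

    def close_stream(self, stream_id):
        # After FIN/reset, datagrams for this stream ID are no longer valid.
        stream = self.streams.get(stream_id)
        if stream:
            stream.closed = True

    def on_datagram(self, stream_id, payload):
        """Return True if the datagram is deliverable, False if dropped."""
        stream = self.streams.get(stream_id)
        if stream is None or stream.closed:
            return False  # unknown or finished stream: drop on the floor
        return True

demux = DatagramDemux()
demux.open_stream(0)
assert demux.on_datagram(0, b"hello") is True
demux.close_stream(0)
assert demux.on_datagram(0, b"late") is False
```

Alan's reordering question below (a datagram racing the FIN) is exactly the case where this strict rule drops data that was sent legitimately.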
O
Okay. I can think of an example where I sent datagrams and then sent the FIN on the stream, but the FIN arrived first. How will the receiver handle those datagrams? Again, if you want to move on and handle this later, that's fine, but it's something I just want to understand.
J
Regarding directionality, I wonder if we can simply support both, because we had similar discussions for QUIC streams and ended up supporting both, since the one-type-only approach turned out to have many pitfalls. And I'd prefer doing that because we have a 62-bit space, and we can spare some numbers.
C
That is also an option. All right, thank you. Cool, so the sense I'm getting on this one is that we don't have clear consensus, so we're going to keep discussing this one on the list, but let's try to answer some other separate questions. Next slide, please. All right: how do we negotiate these? In the current drafts,
C
this is done using the Datagram-Flow-Id header, where at the start of your request you say, I want to use this flow ID, and you can send a list, and in that list you use the parameters to explain the semantics and the formats. The problem there is,
C
it prevents you from negotiating this midstream, like we talked about, which is kind of sad, because we could build some cool things with that feature. Also, people didn't really love the format of this list. So another option we looked at is to instead use a register message that you send midstream; we'll talk in later slides about how that message is encoded. It could be an HTTP frame.
C
The sense on the list is that no one was happy with the header, and so far I've seen people who were happy with the message. So maybe, in the interest of time, let's keep it short, but let me know what you think. Alex?
K
I explicitly support the message, because I think there are useful capabilities in doing things midstream.
C
Cool, thank you. EKR? Just to confirm: this has the same issues about timing as previously. You have to worry about race conditions between this message and the actual flow messages, right?
C
So they're tied questions. If we have a message, then we need to answer the question of whether there is a way to accept or reject this message or not, which was the question on the previous slide. We haven't gotten to the bottom of that, but it is still something we're going to need to answer, right.
C
Okay, and you wouldn't allow me to reuse them, I imagine, right? Given that we have a 62-bit space, I would expect not to reuse, because we had a discussion about reusing last IETF and folks were saying, let's not reuse. Okay, well, I think provisionally I prefer register. Okay. Thank you. Kazuho?
J
I'm not sure I follow the benefits of the register-flow-ID message design, because, if I recall correctly, we just agreed to use the two-layer design so that intermediaries can be ignorant of what the contexts are. But now it seems to me that the message design is requiring each intermediary to understand these extensions.
C
Oh no, that's a very good question, and I should have explained this; I apologize. The message would flow through the intermediary. The intermediary would only see the stream ID, and it would not touch anything inside. It would treat everything inside as opaque, including the flow IDs or context IDs or whatever, and the way we encode what the format is.
J
Thank you for the clarification. I think some of my concerns have been resolved, though I'm not sure we need two ways of exchanging end-to-end messages, one being the datagram frames and the other being stream chunks of some other kind.
O
Alan: Yeah, I guess I'm wondering, given that if this passes through an intermediary they basically have to forward it and can't modify it, is it also possible for two endpoints to just pre-agree that a flow ID has a certain meaning, without sending a dynamic registration of that on the wire?
O
I guess the reason I'm asking is, since the intermediaries have to ignore them anyway, maybe there's no way to stop me from doing this, but it also gets around some of the problems we're talking about, namely what happens if this message doesn't make it through, and when I can start sending things with a given identifier. If it's unilaterally defined by my extension that the flow IDs have this particular semantic, then I don't have to worry about running into those issues.
C
That's actually a separate slide. I think there are use cases where folks don't need this, and how we make it optional is in a coming slide, so let's put a pin in that for now. Okay, then Tommy says plus one to the message design. Then Mike.
P
I'll just point out, in terms of history here, that the Priority Update mechanism the HTTP working group is working on decided to use both headers and frames for something like this: a header so that you get the information with the request and don't have to worry so much about timing, and then a frame to update it later.
C
Cool, thank you. This one, I wouldn't say, is as clear-cut as our first one, but I'm getting a sense that more folks are leaning towards the message. We definitely need to confirm it on the list, because it's not absolutely clear. Next slide, please. All right, let's see, we have a minute left; I'll just bring up that one and we'll skip the rest of the slides.
C
It was on the topic of things being optional. There are cases, take WebTransport for example, that in my understanding don't need the second layer of context ID or flow ID and don't want to spend that extra byte on the wire when they don't need it. One proposal from Lucas on the list this morning was:
C
if we go with the register message, you can add a separate message which, instead of registering one flow ID, just says, okay, for this entire stream there will not be any flow IDs; boom, the whole thing is set. That actually works nicely, and it works well with false start as well. So I think that gives us the best of both worlds, where methods or other kinds of extensions that don't need this don't have to use it, but it's built in a way that gives us the extensibility.
A
Chris here, before we go on: we're just about at time for this. It seems like we're making really good progress, and this is blocking some of our other progress on the rest of CONNECT-UDP. So we do have the option to drop the rest of our as-time-permits items, if folks feel this is valuable, and add that time back in here; that's the meaning of as time permits.
C
All right, let's go then. Ben Schwartz was first in the queue.
F
Hey, I don't think this topic is actually that high priority. CONNECT-UDP as a whole, sure; this specific message, not so much. But since we're talking about it, I don't think this is necessary. If you're in WebTransport and the design doesn't call for flow IDs, then you don't need flow IDs, and we don't need a frame to talk about it; both sides already know. If you're in CONNECT-UDP, we've...
C
You can't parse it no matter what. There's no point in parsing the flow ID, because even if you had access to it, you wouldn't know what to do with it. So until you've received a single register message, any register message, you're going to have to buffer or drop these, and the moment you've received one, boom, you know which mode you're in. Yes.
F
That makes this more complex; it adds states to your state machine. It means that instead of: I get a packet that came in, I look at the context ID to find out what to do with it, I see if I have a matching context ID, and if I do I go, otherwise I buffer; I instead have to have an additional check and an additional asynchronous state where, before I even attempt to look at the flow ID, I first have to wait until I've figured out whether I'm in the flow-ID state.
F
I have another set of interacting asynchronous events to juggle. For that reason, I'd say if we really need this, then it should go in an HTTP header, so that it's finalized before we start receiving data.
G
Thanks, David. I'm going to try to parse and respond to what Ben said, because I still need to think about that. But I think you need something; I think you pointed out that you need some sort of registration mechanism, otherwise the implicit one makes the timing difficult. I don't know how to establish synchronization if you have an implicit one, so you need something.
G
On this, I don't want to speak to headers, because I think the whole point of this was to keep it end to end. It seems like having this message would be the simplest thing to do.
C
Yeah, I'm not sure what I think about this particular thing, but I'm getting the impression we don't actually have a good handle on false start, which people are begging you to rename, as we might want to. So perhaps we should flesh out in some detail what this really means. As I pointed out in the chat, there are really two semantics, right?
C
One is: I've sent a setup packet, and if that setup packet fails, then these packets will be irrelevant. That's like if I say, connect to this remote location: if I send you a connect request to an invalid IP address, nothing is going to salvage the subsequent packets.
C
That flow is dead and those packets should be thrown away. The other is soft state, where I'm saying, set up this kind of compression context, and if that packet gets lost, then actually this is somewhat different, because those packets should then be re-emitted, either by the end or by the intermediate point. So those are conceptually somewhat different, and they might have different...
C
They might in fact have different requirements in terms of buffering versus retransmission. So I guess it's probably helpful to try to break out what people really expect, and whether they buffer, because another valid implementation, especially in the connect-to-the-other-side version, would be merely to drop the packets and count on the end-to-end sender to retransmit, which would actually...
C
...affect rate-control behaviors. Yeah, so the way I think about it, and tell me if this doesn't make sense, is that I really think those reduce to the same problem. Let me try to say it more conceptually. The idea here is you're doing some kind of negotiation step, and this applies to false-starting TLS, by the way, where you say, okay, I want to do this, this being setting up a TLS connection to you, and then you say, okay, let me just send data before
C
I know whether you're okay with that or not. If you're okay, boom, we've saved time, everyone's happy. If you're not okay, this data went into a black hole; it's dead, right? That's kind of the fundamental property. Then you get some kind of notification that that happened, conceptually, and you have a decision to make about retrying, and the retry could be at any layer in the stack.
C
So I think your two separate cases really are the same one, in my mind. Am I missing something? Well, only if you think protocol layers don't matter, and I think which layer things happen at does matter. In one of these cases the issue is handleable at the MASQUE layer, and in the other it is not. In particular,
C
if I ask you to bind to an invalid IP address, or one that's forbidden by policy, then that has to report an error to the consumer of MASQUE, and it has to handle it. Whereas, if what happens is...
C
if what happens is I say, please initialize a new compression context, and you're like, sorry, I'm only going to have 30 and you've already used 30, that has to be handled at the MASQUE layer. It's all about levels, right? So sure, at some level, yes, I may get bored and hit the reload button on my browser, but I mean...
C
I think the layering points do matter, but I guess I'm just generally saying this: you keep saying buffering on the receiver's side, but is it a perfectly valid implementation to simply discard anything you can't parse? And if that's true, then we shouldn't be optimizing for buffering at all.
C
So, first off, yes: if you get something that you don't know the semantics of and you drop it on the floor instead of buffering it, that is perfectly legal. The buffering is an optimization in case you get reordering. Take, for example, QUIC: we do this for 0-RTT.
C
If you get 0-RTT before the ClientHello, you can drop it on the floor, or you can decide to buffer it in hopes of the ClientHello getting there. Both work; one is faster than the other but costs more memory. That's what we do in our implementation of QUIC, for example, and I'm really envisioning the same property here. Sure, I think that might be helpful to document more clearly, and it's a concrete example of what Ben was saying, right?
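The buffer-versus-drop trade-off being discussed can be sketched roughly as below: datagrams that arrive before the corresponding registration may always be dropped, or optimistically buffered up to a memory cap in case the register message was merely reordered. All names here are invented for illustration, not from any draft.

```python
# Sketch of the receiver-side choice discussed above: drop (always
# legal) versus buffer-up-to-a-cap (faster under reordering, costs
# memory), analogous to buffering 0-RTT ahead of the ClientHello.

class EarlyDatagramBuffer:
    def __init__(self, max_buffered=16):
        self.registered = set()   # flow IDs whose semantics we know
        self.pending = {}         # flow_id -> list of early payloads
        self.max_buffered = max_buffered

    def on_register(self, flow_id):
        """Registration arrived: release anything held for this flow."""
        self.registered.add(flow_id)
        return self.pending.pop(flow_id, [])

    def on_datagram(self, flow_id, payload):
        """Return the payloads deliverable right now (possibly none)."""
        if flow_id in self.registered:
            return [payload]                  # known flow: deliver now
        queue = self.pending.setdefault(flow_id, [])
        if len(queue) < self.max_buffered:
            queue.append(payload)             # buffer, hoping the
                                              # register was reordered
        # over the cap: silently drop on the floor
        return []

buf = EarlyDatagramBuffer()
assert buf.on_datagram(2, b"early") == []    # not yet registered
assert buf.on_register(2) == [b"early"]      # registration releases it
assert buf.on_datagram(2, b"next") == [b"next"]
```

Dropping corresponds to setting `max_buffered=0`; both behaviors give the sender the same view, since an unreliable datagram may be lost anywhere on the path.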
C
If I don't know whether or not flow IDs are going to apply, then when I get these packets I don't know how to deal with them, right? Whereas if I knew what the flow IDs were, I'd know how to sort them or bucket them, or something like that.
C
So exactly what the receiver of a packet knows when it receives it is important information, and if you're going to buffer them anyway, I just think it'd be useful to map out in a little more detail how we expect this to work. Absolutely, and just to be clear, the goal here was to talk about 10,000-foot properties.
C
This will turn into a PR with a bunch of text that will hopefully answer these questions, but you're right, we'll have to write that nicely. Cool, thanks. Then there's a bunch of discussion about having a better name for false start; happy to call it something else. Martin Duke?
C
Oh okay, Lucas.
N
I just want to give an example, having done something with CONNECT-UDP in a client. It's really easy to do a one-shot CONNECT-UDP, provide a QUIC Initial packet, and then just switch contexts and move on to some other work. On the receiver side, okay, it's going to receive the CONNECT-UDP and then it's just going to respond with the 200; there's no waiting for a handshake and an ACK and all of that stuff, as with TCP.
N
So it's a nice property just to send some stuff, and yeah, if the packet gets dropped by the MASQUE server because it doesn't buffer it, okay, I can wait for a retransmission timer to fire and do some stuff. I personally don't see the issue with this thing, whatever we want to call it, fast open or whatever; it's a nice thing to keep, in my opinion.
C
Cool, thanks, and I really agree. Actually, for the record, in my implementation of CONNECT-UDP the client doesn't wait for the HTTP 200 before it starts sending datagrams, and the server doesn't buffer them: if the server gets a datagram on a flow it doesn't know, it drops it on the floor. Everything works great, because conceptually, if that packet gets lost, it's the same from the client's perspective.
K
Alex: Thanks, David. I wanted to address something EKR was saying, which I found a little bit confusing, when we were talking about things like the compression contexts and saying that you can't afford to lose them. I think that's fundamentally a type error, right? If you're sending something like a compression context that you can't afford to lose on an unreliable datagram, then I think that extension is malformed.
C
I don't think that's what EKR meant, and I'm going to maybe rephrase it differently; EKR, you can correct me if I'm wrong. He's saying: what if you have a packet that you want to compress, and you say, I want to do compression, and you send that reliably, but then the other side says, no, I don't want to compress for you, I'm out of memory. Then conceptually you could say,
C
okay, now I want to retransmit this uncompressed. This goes back to what we were talking about with false start, which is: I guess you could do this, but it's probably a lot simpler to just say, oh well, that packet got lost.
C
Cool, thanks, definitely agree. All right, so that's the end of the queue for this slide. We ended up not really talking about this slide; clearly we need more discussion on this one. We don't have enough to declare any kind of consensus, but I think we set the stage. By the way, I'm going to file an issue for each of these slides, and we can keep the conversation going on all of these to eventually drive consensus. Next slide,
C
please. So we have a few things about what the message would look like, assuming we had decided to go with the message and not the header. I'm hoping we should still go through these, because going for the message sounds like a very possible outcome, and also I think discussing this will help people understand what the message kind of looks like. So the question is: okay, let's assume we agree that we want this message; how do we send it on the wire?
C
What does it look like? The two proposals I've seen so far: the one on the left is flow ID 0; you reserve datagram flow ID 0 as a control channel. That has the neat property that intermediaries already know how to handle datagrams and forward them all the way end to end anyway, so they don't have to do any more work for this.
C
It requires adding a reliable datagram frame that the intermediary needs to know about, which is at the HTTP/3 layer and has roughly the same semantics as the QUIC DATAGRAM frame, except it doesn't need the stream ID, because it's already inside an HTTP/3 stream; it just has the flow ID. The alternate design, here on the right, is that we create an H3 frame for this. It's a cleaner separation; it doesn't have this notion of, wait, what do you mean
C
you can send this unreliably. But it means that the intermediary needs to know how to parse this frame, and by parse I just mean look at the HTTP/3 type and go, oh, this is that frame, copy the data and send it to the right backend, or vice versa to the right client, and that's it: no parsing of what's inside, no looking at the flow IDs, no looking at any of the extension information, just copy-paste. Do folks have thoughts? I see Ben.
C
Okay, then I will say: please send us what would be even better, because I'd love to hear it. I actually have a few slides on this message design, so let's maybe breeze through them a little bit so we see all the possible questions. Next slide, please. Again, this was part of Ben's proposal, where Ben proposed the one on the right, but pretty much we have two ways of doing this.
C
We either say, okay, this frame is just our flow ID and that's it, or we add yet another layer of abstraction, because those abstractions are great, where this frame, or flow ID 0, however we want to do it, means this is a control message, and then you have a varint that says which message this is, where, say, zero would be register flow ID. That allows extensibility there.
C
My gut feeling there is that it reminds me of SNI, which had one of those extensibility joints that immediately rusted shut, meaning that in SNI you can say what type it is, but you can only ever say hostname, so it was kind of a waste. But the one property it has that I think is kind of cool is that the extensibility,
C
unlike with something like SNI, becomes end to end. If you define a new H3 frame every time you do this, you would have to modify the intermediaries, whereas with this, if the intermediaries are just copying things back and forth and not looking at them at all, you can have an extension that's end-to-end without modifying the intermediary. So that's kind of neat. All right, next slide, please.
C
Another part, and this one is even more of a bike shed, so let's not completely dive into how we should do it, but just to keep it in the back of people's minds: in this frame you need the flow ID to say, okay, I'm registering this flow ID, but you also need more information. For example: I'm registering flow ID 42, and I want to do IP compression with this IP and this port.
C
Right now it's a list of text key-value pairs, because that's just easy. In one of the proposals we talked about doing QPACK, and then people said, wait, you're reinventing headers, we don't want that, so we said absolutely not, let's not do that; I agree. We could also have a binary encoding that is more akin to TLVs and how QUIC transport parameters look. That's something, but it is definitely on the color-of-the-bikeshed end of the spectrum, so we'll have to figure that one out. Next slide, please.
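The binary alternative mentioned, TLVs in the style of QUIC transport parameters instead of text key-value pairs, might look roughly like this. The parameter codepoints (`TARGET_IPV4`, `TARGET_PORT`) are invented for illustration.

```python
# Sketch of a TLV (type, length, value) encoding for the registration
# parameters, e.g. "flow ID 42 does IP compression with this IP and
# port". Codepoints are hypothetical.

import ipaddress
import struct

TARGET_IPV4 = 1   # hypothetical parameter types
TARGET_PORT = 2

def encode_tlv(ptype, value):
    # 16-bit type, 16-bit length, then the raw value bytes.
    return struct.pack("!HH", ptype, len(value)) + value

def encode_params(ip, port):
    out = encode_tlv(TARGET_IPV4, ipaddress.IPv4Address(ip).packed)
    out += encode_tlv(TARGET_PORT, struct.pack("!H", port))
    return out

def decode_params(data):
    params, offset = {}, 0
    while offset < len(data):
        ptype, length = struct.unpack_from("!HH", data, offset)
        offset += 4
        params[ptype] = data[offset:offset + length]
        offset += length
    return params

blob = encode_params("192.0.2.1", 4433)
params = decode_params(blob)
assert params[TARGET_IPV4] == ipaddress.IPv4Address("192.0.2.1").packed
assert struct.unpack("!H", params[TARGET_PORT])[0] == 4433
```

A decoder can skip unknown types by honoring the length field, which is the property that keeps this extensible without intermediary changes.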
Then another point, which was: do we need the reliable datagram frame? There are some cases where we're saying it would be nice to be able to send datagrams reliably sometimes, like for CONNECT-UDP. Some folks were saying, well, I would like to send my first UDP packet reliably, because if I lose the connect request, the proxy is not going to be able to parse it until I retransmit it, so I might as well shove the first UDP packet in with it and retransmit that as well. In the design where we use flow ID 0 as a control channel, we absolutely need this, because we want to send those as reliable datagrams. In other designs it's more of a nice-to-have, because you could also do it by defining a TLV encoding on the data stream.
C
The only downside here, well, the only property here that goes both ways, is that it requires intermediary support, just like parsing the datagram frame. So if we put it in early, then we have it; it costs a bit more at first, but if we don't do it now, it might be much harder to do later, because updating intermediaries is hard. So again, another open question. We have a question from Jana.
G
A quick comment: this ties into everything I've been trying to think about. If my understanding is correct, I think it will be helpful to actually characterize the properties you want for the register messages and then figure out how to map them to QUIC primitives.
G
A reliable datagram is kind of weird, and I think we have a name for that: it's called streams. I don't want to design it right now, but it just seems that reliable data is nothing but a stream. We might need to figure out how to do IDs and associate things with things, but it seems like we might want to go back to looking at the primitives that QUIC offers and map them to the requirements of these messages.
C
So, to clarify, Jana: this would be an HTTP/3 frame, and that's one of the things we kind of messed up in naming; QUIC frames and HTTP/3 frames are completely different things. At the QUIC layer this would be sent over a stream. The idea is, how do you encode a datagram inside a stream, and the answer is, this is a frame at the H3 layer. I apologize, I should have spelled that out in the slides; that's how this would look.
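A minimal sketch of "a datagram inside a stream" as described here: an HTTP/3-style frame with a varint type and varint length wrapping the flow ID and payload, so the same semantics can ride reliably on a stream. The frame-type value is invented for this sketch, not a registered codepoint.

```python
# Hypothetical wrapping of a datagram payload in an HTTP/3 frame so it
# travels reliably on a stream: varint frame type, varint length, then
# the datagram body (flow ID + payload).

def encode_varint(v):
    """QUIC variable-length integer (RFC 9000, section 16)."""
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | (1 << 14)).to_bytes(2, "big")
    if v < 2**30:
        return (v | (2 << 30)).to_bytes(4, "big")
    return (v | (3 << 62)).to_bytes(8, "big")

HYPOTHETICAL_DATAGRAM_FRAME = 0x21  # made-up H3 frame type

def wrap_datagram(flow_id, payload):
    body = encode_varint(flow_id) + payload
    return (encode_varint(HYPOTHETICAL_DATAGRAM_FRAME)
            + encode_varint(len(body))
            + body)

frame = wrap_datagram(7, b"udp-payload")
# 1-byte type (0x21) + 1-byte length + 1-byte flow ID + payload
assert frame[0] == 0x21
assert frame[1] == 1 + len(b"udp-payload")
assert frame[2] == 7
```

Because the frame carries its own length, an intermediary can copy it opaquely to the right backend without parsing the flow ID or payload, which is the copy-paste behavior described earlier.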
G
Oh no, I get that, but I think the same thing applies to HTTP/3, right? The default assumption is that it is reliable. The whole point of writing this draft is so that you can actually afford to use HTTP/3 along with unreliable datagrams, agreed, and so to me a reliable datagram, even in the HTTP/3 world itself, sounds like a DATA frame.
C
Okay, I mean, this is just a way to send the same semantics but over a stream. Another way to say it is, you could build this without a frame and have your congestion controller retransmit that datagram frame, but it's simpler to just have one layer above this. But again, you could put it in the DATA stream and then you're done; that is totally a valid option.
P
Cool, thanks. Mike? I just wanted to note another way of thinking of this. In the H3 context, we defined all H3 frames to be hop-by-hop, and only the request itself is end to end. This is effectively an end-to-end wrapper for an H3 frame.
C
Okay, that is an interesting thought. I don't want to do things that are too generic, but yeah, that could be neat. All right, thank you. Okay, and this brings me, next slide please, to the end of our slides on H3 datagrams.
C
So I'm really happy we landed, well, pending confirmation on the list, but we landed in a good spot on the two-layer design, or what I call the two-layer design. A lot of the other questions we will need to talk about more, and that's cool. I'll take an action item to file a GitHub issue for every single one of these, and then I'll email the list; please double-check, and if I've forgotten anything, please file more issues.
C
I think it would be great for us to discuss these more, and chairs, at some point I would probably prefer another interim next month for us to make more progress here, depending on how we progress on GitHub and on the list. Otherwise, back to you, chairs.
A
Thanks, David. All right, sounds like we've got enough to make some good forward progress there. Please do update us when we've gotten a little bit of that done, and of course we'll keep the discussion going on GitHub and on the list. Next up, we have a shortened slot on QUIC-aware proxying.
E
There are other cases, like how you deal with the initial packet sizes you want for QUIC within QUIC, that actually could apply to CONNECT-UDP in general, and so maybe those could move over. But specifically, the extension mechanism here is about letting the proxy be aware of the connection IDs, and that allows a couple of different things. One:
E
it does allow you to reuse ports between a proxy and a target, so that if it is having to use v4, or just has a ton of connections going through, it doesn't necessarily need to open up a new UDP socket for every single QUIC connection that's being proxied, since the connection IDs can do the job of demultiplexing.
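The demultiplexing idea here, one shared UDP socket routing inbound packets by the destination connection ID of a QUIC short-header packet, can be sketched as below. The class and method names are invented, and this is a deliberately simplified routing table, not a complete QUIC packet parser.

```python
# Sketch: a proxy reusing one UDP socket for many proxied QUIC
# connections, routing inbound short-header packets to clients by
# destination connection ID. Names and offsets are simplifying
# assumptions for illustration.

class SharedSocketDemux:
    def __init__(self, cid_len=8):
        self.cid_len = cid_len    # proxy chooses a fixed CID length
        self.routes = {}          # connection ID bytes -> client address

    def register(self, connection_id, client_addr):
        """Learned via the QUIC-aware extension: this CID belongs here."""
        self.routes[connection_id] = client_addr

    def route(self, packet):
        """Return the client address this packet belongs to, or None."""
        if not packet or packet[0] & 0x80:
            return None  # empty or long-header packet: handled elsewhere
        # Short header: the destination CID follows the first byte.
        dcid = packet[1:1 + self.cid_len]
        return self.routes.get(dcid)

demux = SharedSocketDemux()
demux.register(b"\x01" * 8, ("198.51.100.7", 50000))
pkt = bytes([0x40]) + b"\x01" * 8 + b"rest-of-packet"
assert demux.route(pkt) == ("198.51.100.7", 50000)
assert demux.route(bytes([0xC0]) + b"\x00" * 20) is None
```

This is the same lookup that the eBPF forwarding rules mentioned later would perform in the kernel or NIC instead of in user space.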
E
That also has some effect of obscuring exactly how many clients you have. The other big aspect is improving efficiency and performance. There are kind of two problems here. One is that if you're doing a lot of QUIC within QUIC, there can be processing overhead; as EKR was pointing out on the list, it's not necessarily the crypto that's the concern, but the fact that you often have to process it in software in a deployment. But then there's also a concern about...
E
A lot of what I'm thinking about here is when you have multiple proxy hops being used to further obfuscate which IP address is being used, and to not allow one proxy to unilaterally know everything about a flow.
E
It is absolutely not for everyone and not for all cases. It does allow an observer across multiple parts of a path to correlate packets, so it's not always applicable or useful, but in cases where that is not the main concern or attack, this is useful. It is critical for this that both parties consent before any type of forwarding mode is done.
E
So I think the key thing for us is: if we are doing multiple hops of CONNECT-UDP, the performance definitely is impacted, and if you are able to have QUIC awareness you can start doing forwarding mode. So, in tests using a quiche-based HTTP/3 stack, we tested both with kind of normal CONNECT-UDP... sorry?
M
E
Yes, which quiche? Sorry: the one that Lucas works on, the Cloudflare-based quiche, not...
E
So, using the QUIC awareness to program forwarding rules, based on QUIC connection IDs, into the NIC using eBPF rules: this was just running a test, for example, on a one-gig Ethernet link to an HTTP/3 next hop. So, next slide. On this link, you know, directly with HTTP/3, we could get...
E
So, in cases where this is appropriate, it makes it essentially free to go through one more hop. And, you know, it's not for treating this as a pure VPN tunnel; but essentially, if you want to be able to have a QUIC router that you can have a handshake with and say, "I want to route through this hop, and then that hop, and then that hop," it gives you a very, very efficient way to do that. Next slide.
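As a rough illustration of the forwarding-mode idea discussed here: once both endpoints have consented, the proxy can install a per-connection-ID rule and then forward matching datagrams by rewriting addressing only, with no tunnel encapsulation or decryption. The rule shape and the fixed 8-byte CID length below are assumptions made for illustration; the real extension's negotiation and wire details live in the draft.

```python
# Illustrative forwarding table keyed on QUIC connection IDs: matching
# datagrams are forwarded with the DCID swapped, rather than being
# decapsulated from the CONNECT-UDP tunnel. Assumes fixed 8-byte CIDs.

rules = {}  # client-facing DCID -> (target address, DCID to use toward target)

def install_rule(client_dcid, target_addr, target_dcid):
    """Called only after client and proxy have agreed to forwarding mode."""
    rules[client_dcid] = (target_addr, target_dcid)

def forward(packet, cid_len=8):
    """Return (next_hop, rewritten packet), or None to use the tunnel path."""
    dcid = packet[1:1 + cid_len]
    rule = rules.get(dcid)
    if rule is None:
        return None  # unknown flow: hand it to the normal tunneled path
    target_addr, out_dcid = rule
    return target_addr, packet[:1] + out_dcid + packet[1 + cid_len:]
```

The point of the eBPF/NIC-offload version is that a match-and-rewrite rule like this is simple enough to run in the kernel or in hardware, which is where the capacity win comes from.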
E
So, as we've been working on this, there are a couple of interesting issues that have come up. Some of these we've already fixed; these are slides from IETF 110, and we did come out with a rev of the doc just this week.
E
It tries to start talking about some of these, and I do think that some of these issues are things that we could bring back into the overall conversation and thought around CONNECT-UDP. Particularly when you are doing QUIC specifically, you have cases where you have to worry about the MTU, even kind of regardless of changes in MTU.
E
Because, you know, clearly you wouldn't want the connection between the client and the proxy to be constrained to 1200 bytes, because then you wouldn't even be able to put a QUIC Initial packet through that tunnel. But even if you do what some implementations do, like what Google does of hard-coding it to 1350, you probably don't want to just try to stick 1350 inside a 1350-byte QUIC tunnel, because that's not going to fit. So we need to have some thoughts around path MTU discovery, et cetera. Doing connection migration is also quite interesting when you have a proxy here; there needs to be thought about that.
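The "1350 inside 1350" problem is just arithmetic: every inner packet rides as an HTTP/3 DATAGRAM inside an outer QUIC packet, so the inner budget is the outer MTU minus per-packet overhead. The overhead figures below are rough assumptions for a typical outer short-header packet, not numbers from any draft.

```python
# Back-of-the-envelope inner-MTU estimate for QUIC-within-QUIC.
# All per-field sizes here are illustrative assumptions.

def inner_mtu(outer_mtu,
              first_byte=1,   # outer short-header first byte
              cid_len=8,      # outer destination connection ID
              pkt_num=2,      # outer packet number
              aead_tag=16,    # AEAD authentication tag
              h3_framing=5):  # DATAGRAM frame type + flow identifier varints
    overhead = first_byte + cid_len + pkt_num + aead_tag + h3_framing
    return outer_mtu - overhead

# With these assumptions a 1350-byte outer packet leaves 1318 bytes,
# so a 1350-byte inner packet cannot fit.
```

Under these assumptions a client that wants to send 1350-byte inner packets needs the client-to-proxy path to carry at least 1382-byte datagrams, which is exactly why path MTU discovery matters here.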
E
If you are resetting your congestion control timer, how does that get propagated up to the target that you're connecting to? And then one other thing that we've already done: it's important that if you are able to share sockets between the proxy and target, you can do it between QUIC flows, where you can actually distinguish between them using connection IDs; but it's not necessarily safe to do that for other flows. Anyway, I think that's the end of these slides, so, yeah.
E
We don't need to spend a lot of time with questions or anything, but this is just one of the kind of motivating areas that we can work on as extensions to CONNECT-UDP.
C
Yeah, I guess I'm really quite surprised by the performance numbers here. You know, we have plenty of examples of people doing line-rate TLS at substantially higher than a gig; there's this Netflix paper from a few years ago showing how they did it. So I don't know what the situation is here, but it sounds like the particular implementation you had can't handle a bit of QUIC load...
E
Sure. So, I mean, to be clear: these are numbers from March, and I think quiche's own implementation for some of this stuff has improved since then. However, I think there is something about scalability here, like being able to do a hardware offload of rules, to essentially turn this into a QUIC handshake to a router that then just becomes a router.
E
So it's like a secure, authenticated router: that box is going to be able to have a lot more capacity if it's just programming rules. Even if it is able to achieve good line rate, it's going to be able to handle more connections if it's just doing hardware offload rules. So again, I do not think it should be used always. All the cases that we're looking at using it for would be with a mix of forwarding and tunneling, so that no end-to-end user connection would ever be just forwarded, but rather interior links, where you don't necessarily have user data exposed: you already do have QUIC protection there. So I don't think it's quite the same as saying we're using a null cipher on TLS; it's saying that we just don't need to wrap up the TLS encryption three or four times.
C
I think we can argue this analogy later. The second point I wanted to make is that all the multi-hop scenarios I'm familiar with are situations like Tor, where you actually have really aggressive threat models, and those aggressive threat models mean you have to use different encryption anyway, encryption that has constant-size frames and doesn't expand them. There's a whole set of work on this, like Sphinx. And so again, like the MTU thing, I guess, I mean...
C
What I'm saying is that these are a set of requirements which one often sees listed for having weaker security, in exactly this sort of vague mode, and so it'd be helpful for me to have a much clearer setup, like exactly which applications, that would allow me to assess whether these were good trade-offs to make. Yeah.
E
That sounds good, and we can talk about that more in the future, certainly. And yeah, it is not the same as Tor, absolutely.
C
I'm back; all right, it's me again. So now we're talking about the IP proxying requirements document, and we're getting close to the end. I'm going to walk us through a few of the remaining issues and then hand it off to the chairs on where the working group wants to go next. So, first one; this one, I think, would be easy.
C
The current requirements draft said we SHOULD support HTTP/2. It sounds like this feature is really important, so people are saying no, we need to support this. So there's a PR that's a one-word change, which is: we MUST support HTTP/2. Does anyone object to that?
C
So this one is a clarification, which was slightly editorial, on supporting route negotiation. It wasn't clear to everyone that, you know, negotiating routes allows you to exchange a default route, so we have a PR that adds a sentence to make that crystal clear. Does anyone object to this?
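To make the "default route" point concrete: a route advertisement is essentially a list of IP prefixes, and a default route is just the zero-length prefix (0.0.0.0/0, or ::/0 for IPv6) that matches every address. The message shape below is invented for illustration and is not the draft's wire format.

```python
# Illustrative route negotiation: advertised routes are prefixes, and
# advertising 0.0.0.0/0 (or ::/0) is how you exchange a default route.
import ipaddress

def advertise(prefixes):
    """Parse a list of advertised prefixes into network objects."""
    return [ipaddress.ip_network(p) for p in prefixes]

def covers(routes, addr):
    """Would any advertised route carry traffic for this address?"""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in routes if net.version == ip.version)
```

So an endpoint that advertises only 0.0.0.0/0 is offering to carry all IPv4 traffic, while one that advertises 10.0.0.0/8 is only offering that range; both are expressed by the same negotiation mechanism.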
D
Yeah, and that was my question: that's not clear to me, because it addresses one specific use case, so it could also be an extension, as we've done with other extensions.
C
I mean, that seems pretty basic: you can't have any kind of tunneling without routes, because then you don't know what you can send over it. So I don't see that as...
K
This is Alex. Thanks, David. I just wanted to say that I think this is a mandatory feature because of the other text in the IP proxying requirements, which says that we have multiple use cases, some of which require routes. So, sort of orthogonal to the payloads-versus-packets question, other use cases require this, so I don't see how it could be anything but mandatory.
D
So, I mean, I think that's a good discussion to have, so we should stop here. But we said, for other use cases, that if it's only one use case, it is an extension and not mandatory; and now we're saying that because it's one use case, it's mandatory and should not be an extension. So I think... I don't think...
C
We don't need to take this anywhere right now, so let's just move on. Yep, let's put a pin in this. Agreed. All right, next slide, please.
C
So, about validating the address assignment. Right now the draft has a requirement for being able to assign an address to the peer, and this issue is about validating that the peer has authority over this address range. This is a policy decision, and policy is out of scope. So the proposed resolution here is to close with no action. Are there any objections?
D
There's support for it, right, but what... there's no... there's a non-requirement for address assignment. I think that's just too much, so I would just remove this from the draft.
C
Well, because, and that was part of our chartering, don't forget: these kinds of policies are not something that we want to do. You know, for example, saying that I'm authoritative over RFC 1918 space: I don't even know what that means. So again, I just really don't understand what we would build as part of the protocol here.
C
Let's keep moving; go ahead. I guess I'm trying to work out what's at stake here. So, ignoring the text for a second: as I understand the situation, there are effectively two operating modes for a protocol like this. One is, I, the client, just transmit my own addresses, and then the server NATs to whatever the heck it wants to, right? And the other is, the server tells me what addresses it wants me to use.
C
So, as I understand it, this protocol, these requirements, are compatible with the protocol implementing, I would say, either the second only, or the second and the first, but not the first only. Correct? Namely, this seems to be the implied requirement here: it must be the case that...
C
...this protocol does support a way for the server to give me an IP address range, which I will then transfer on, unmodified, out the door, right? But it could also be the case, if there's not a requirement against it, that it also provides the other function. Is that what's at stake here?
C
Yes, I think the idea is that whatever transformations happen outside of MASQUE are considered out of scope. Let's say, for example, you want to put a NAT between what's happening in the MASQUE tunnel and what's happening on the internet; we just declare that out of scope for MASQUE. I think that's probably it. Well, I guess what I'm trying to get at, though, is...
C
Is this text implying that the first thing you have to do is always an address assignment from the peer? No, no, not at all. And if you're asking what the text does, might I suggest reading the draft? I am reading the text, David. Typically, when the response to "your text isn't clear" is "it's clear," that's not really quite the right answer. Okay, then: if it's not clear, then absolutely, let's fix that. Sorry about that.
D
And so I should probably look back at the issue, but my memory is that this was about a non-requirement, that is, specifying that any kind of address validation should not be supported, which I think we might...
C
Okay, so just to be clear: something being out of scope doesn't mean you can't do it, like as an extension or something; it just means that we don't need to go there. But might I suggest we move on to the last issue? Because I think that's the most important one that really needs face-to-face time, and this one isn't clearly well-formed enough to have a conversation about, because we're all confused. So maybe, Mirja, if you could add more text on the issue to help clarify this. Tommy?
C
Could you make it real quick?
C
All right, so this was the topic we broached a little bit at the previous IETF, and it's the most important one when it comes to IP proxying in the MASQUE working group: are we proxying IP packets, or are we proxying IP payloads? One point is, if you build a solution that proxies IP packets, you're able to build an extension that compresses the packets, which therefore allows you to proxy IP payloads; but unfortunately, you can't go the other way around.
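The asymmetry described here can be seen with a toy example: in packet mode the tunnel carries the full IP header, so a compressing extension can strip per-flow-constant header fields and effectively recover payload mode; in payload mode the header information was never carried, so full packets cannot be reconstructed. The IPv4 layout below is standard, but the "compression" scheme is made up purely for illustration.

```python
# Toy demonstration: packet-mode proxying plus header compression can
# emulate payload mode, but not vice versa. The "compression" here just
# caches the 20-byte IPv4 header per flow; it is not a real scheme.
import struct

def build_ipv4(src, dst, ttl, proto, payload):
    """Minimal IPv4 header (no options, zero checksum) plus payload."""
    hdr = struct.pack("!BBHHHBBH4s4s",
                      (4 << 4) | 5, 0, 20 + len(payload),
                      0, 0, ttl, proto, 0, src, dst)
    return hdr + payload

def compress(packet, flow):
    """Packet mode with compression: send only the payload onward."""
    flow["hdr"] = packet[:20]  # recoverable only because we carried it
    return packet[20:]

def decompress(payload, flow):
    """Rebuild the full packet from the cached header state."""
    hdr = flow["hdr"]
    return hdr[:2] + struct.pack("!H", 20 + len(payload)) + hdr[4:] + payload
```

Starting from payload mode there is no header to cache in the first place, which is exactly the "can't go the other way around" point.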
C
If you have something that proxies IP payloads, you can't easily do IP packets. And I think this is the part where I hand it over to the chairs to moderate this discussion.
Q
Yeah, thanks, David; excellent. So, just to remind everyone sort of what we're trying to accomplish with the requirements doc, and the purpose of this particular issue that David just mentioned: we're trying to scope out the requirements for the solution, that is, CONNECT-IP, that we will eventually work on later, hopefully shortly after this meeting concludes and at the next IETF meeting. The document itself is purely meant to support that particular solution.
Q
It's not meant to be published as an RFC or anything, so we don't need to get it perfect by any stretch of the means. And given all the work that's gone into it so far, and all the discussion it's generated, Eric and I think that the document has basically done its job in terms of helping us scope out what should be in this eventual solution and what should not be, modulo one particular issue; and that's this fundamental issue that David just raised. So, next slide,
Q
Please. And that is whether or not the base protocol should support proxying payloads or packets. We had a show of hands during the last meeting in which the outcome clearly suggested that support for proxying packets was sort of essential, but it was less clear on whether or not proxying payloads should be part of the base protocol. And so this is something that's sort of orthogonal to the actual solution; we'll have to sort out this question regardless of where we go with CONNECT-IP.
Q
So right now, with the ongoing consensus call on the list, we're just trying to answer this specific question. Next slide, please.
Q
Ideally, we come out of this with consensus in terms of what the base protocol should do. At that point, we'll consider the requirements document complete, especially given the non-controversial issues, the simple issues that David just went through regarding fallback and whatnot. So I'd like to briefly pause and just ask if anyone sort of violently objects to that; and if not, I think that's what we'll do.
Q
Okay, great. So again: please take a look at the consensus call on the list, chime in with an opinion, and then, when that's done, we can make the changes to the document accordingly, publish a new version, and then move on to the solutions at the next IETF meeting.
Q
And I guess, unless there's anything else: we're right at the top of the hour, so we can call it here. Thanks to Mike for taking notes, and to David for speaking for so long. I feel like this was pretty productive, and we have a number of good avenues moving forward for the HTTP/3 datagram work as well as the CONNECT-IP work. So, yes; and as Eric points out, please don't forget to add your name to the blue sheet and the notes. With that...