From YouTube: IETF115-QUIC-20221107-1300
Description
QUIC meeting session at IETF115
2022/11/07 1300
https://datatracker.ietf.org/meeting/115/proceedings/
A: Some things that everyone has probably seen already: the Note Well is the list of things that we all operate under as part of the IETF. If you have not noted it already, please do so. Some other tips that you may not have seen yet: in-person participants, make sure to use Meetecho on your mobile device or your laptop to get into the queue, and the same goes for remote participants.
A: Make sure you know the typical audio/video things: mute yourself if you're not talking, otherwise we will mute you, and please use a headset if you are talking, because the audio can be challenging.
A: So, administrivia: do we have anybody who is willing to take notes? Robin is not here at the moment, and Robin is usually the one who volunteers. Are there any other brave and hopeful souls?
A: It would be really nice. Anybody? Okay. Martin needs a backup, because he will be one of the people presenting; is anyone willing to take notes while Martin is presenting? Eric? Thank you, we appreciate it so much. Blue sheets are done via Meetecho these days; we truly live in the future. Chat is in Meetecho, and we will be running the queue there. Again, please do get into the queue, or at least attempt to, locally via the Meetecho interface. So, the agenda.
A: We're going to have a very brief chair update. Then we're going to get into our working group items, which consist of multipath, acknowledgment frequency, qlog and QUIC-LB, and then, as time permits after that, we will be talking about (and Ian is laughing at the agenda, which we will talk about in a second)... After that we will get to reliable QUIC stream resets and more, assuming we have time, which we may. Does anybody want to bash this agenda? Martin.
G: I just want to use the opportunity to shill for the congestion control working group side meeting that will occur at 5 pm on Thursday, since that seems relevant to people in this room. Thank you. Yes.
A: All right, that's very useful; agenda bashing saves us. Since [name unclear] is not here, we will just kind of play that by ear, maybe.
A: Okay, and with that, I think, Quentin, are you ready to present?
A: I forgot about our chair updates, which will be very brief. Applicability and manageability are RFCs, yay. Greasing the QUIC bit is also an RFC. So this is forward progress, and version negotiation and v2 are not quite RFCs but have made it through the IESG. So thank you, everyone, for your contributions to these documents and the continued forward progress of the working group.
I: And I was going to put one more bullet point on this slide, which I forgot to do. If you recall, a while back we had the QUIC-LB draft that contained two concepts, and a little while ago we split those drafts apart.
I: So we have QUIC-LB, focused on load balancing, and we have the retry offload draft. We've been speaking to people in back channels just to understand the implementer interest in that proposal, and we're not seeing much, if any, right now. It seems like a decent piece of work is what we're hearing, but our plan is to effectively put that one in the deep freeze for a while. So just look out for updates on the datatracker.
A: Just another thing, in case anyone was not aware: MASQUE is happening on Wednesday; that's relevant to many people's interests. And, of course, Media over QUIC, the new hotness with QUIC, is happening Thursday, Session One.
I: Sorry, yep, this is great; I've been busy, I've got excuses, but no excuses. The people who normally come to the IETF are familiar with these working groups; we have separate working groups for good reasons, but some of what they might be touching on are things like extensions, or inputs and use cases, feeding into things that those groups would like from the QUIC transport protocol.
I: So if you don't normally attend those groups, it might be in your interest to go and see how people are using QUIC, and maybe the sorts of transport features that they would like to bring back to us, which is within our charter. Is that the final slide? Okay, we're done; let's get out of the way.
J: All right, so I'm Quentin, and on behalf of the multipath QUIC draft authors I will present the update since the last IETF. Next slide. Actually, the updates since the last IETF are quite small: minor changes, mostly editorial. The first two bullet points I will discuss a little more in the coming slides. The third point is about removing an error code point that was not useful, because the error was due to a wrong transport parameter value.
J: So we can just say it's an invalid transport parameter code point, and get rid of the useless error code. There were additional editorial PRs that were merged, but these are mostly adding an entry to a table, fixing some commas and so on, so nothing really interesting to discuss here. Next slide.
J: One of the contributions since last time is adding some considerations on doing RTT measurements with a single packet number space.
J: The issue that we have with a single packet number space and the recovery algorithm as defined by QUIC recovery is that some paths' RTT might never get updated, because the largest-acknowledged frame may come from other paths, and some paths might just not get any RTT updates: you only update your RTT sample if the largest acknowledged field increases. And sometimes the measured RTT actually corresponds to the sum of two different one-way delays.
J: So you do not have a clear view of the RTT. The Alibaba folks ran some experiments, which were really conclusive, and basically their solutions come in two flavors: either you use the timestamp extension as defined by Christian's draft, or you use path-specific heuristics at both the sender and the receiver of the ACK.
J: At the sender of the ACK you track a path-specific largest acknowledged, and at the receiver side you only take an RTT sample when the largest acknowledged for that path increases. Maybe the Alibaba folks want to add something?
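The per-path heuristic described above can be sketched as follows. This is a minimal illustration and not text from the draft: the class and method names are invented, and the smoothing gain is the usual RFC 9002-style 1/8 weight.

```python
# Sketch of the heuristic discussed above: with a single packet number
# space, only take an RTT sample when the largest acknowledged packet
# *sent on this path* increases, so a sample never mixes the one-way
# delays of two different paths. All names here are illustrative.
import time

class PathRttEstimator:
    def __init__(self):
        self.largest_acked = -1   # largest packet number acked on this path
        self.sent_times = {}      # packet number -> send time
        self.smoothed_rtt = None

    def on_packet_sent(self, pn, now=None):
        self.sent_times[pn] = time.monotonic() if now is None else now

    def on_ack_received(self, largest_acked, now=None):
        # No new largest-acked for this path: take no sample, to avoid a
        # value that is really the sum of two different one-way delays.
        if largest_acked <= self.largest_acked:
            return None
        self.largest_acked = largest_acked
        sent = self.sent_times.get(largest_acked)
        if sent is None:
            return None
        now = time.monotonic() if now is None else now
        sample = now - sent
        # EWMA smoothing with the usual 1/8 gain.
        self.smoothed_rtt = (sample if self.smoothed_rtt is None
                             else 0.875 * self.smoothed_rtt + 0.125 * sample)
        return sample
```

One estimator instance would be kept per path; ACKs that do not advance that path's largest acknowledged simply produce no sample.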
K: Oh yes, I want to add a few points about our experiment.
K: One goal we tried to achieve is to make the single packet number space as accurate for RTT measurement as multiple packet number spaces, but in doing so we ended up making some trade-offs. For example, multipath QUIC allows the ACK to be returned on either path, but in order to get more RTT measurements we ended up having the ACK for an ACK-eliciting packet returned on the same path. We also modified the largest acknowledged field in the ACK_MP frame, which is not standard. So we're wondering if anyone has a better solution to get more RTT measurements.
G: Martin Duke, Google. Am I misremembering: didn't we get rid of the single packet number space?
J: Yeah, last time, nearly, but we are going to discuss in the remainder of my slides whether we are going to keep the single packet number space. Currently we have a unified solution where, if you are using non-zero-length connection IDs, you use multiple packet number spaces, but if you use zero-length connection IDs then you use a single packet number space, with some extra considerations.
A: You have a slide that talks about single versus multiple; maybe we should just talk about that now. I guess there's this one reminder first.
J: Yeah, I can quickly; we can talk about that now. The difference between a single packet number space and multiple packet number spaces is that with a single packet number space you have multiple paths but one application data packet number space, and you spread packet numbers across the paths while keeping the same sequence. For instance, you can send packet one on one path, then packet two on a second path, packet three on a different path, and so on. With multiple packet number spaces, you have a number space for each path.
J: You have packet number one for a given path and packet number one for a different path, and to acknowledge packets from different paths we have to include an additional frame, called ACK_MP, which carries the identifier of the path you want to acknowledge. So with a single packet number space you acknowledge packets with the regular ACK frame, while with multiple packet number spaces you use the ACK_MP frame to acknowledge packets from the different paths.
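The ACK versus ACK_MP distinction can be pictured with a toy model. This is illustrative only: the real wire encoding of ACK_MP is defined in the multipath draft, and the field names below are simplified inventions.

```python
# Toy model of the two acknowledgment styles discussed above.
# With one shared packet number space, a regular ACK suffices; with
# per-path spaces, the frame must also name the space being acked.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Ack:
    largest_acked: int
    ranges: List[Tuple[int, int]]   # (smallest, largest) acked runs

@dataclass
class AckMp(Ack):
    path_id: int                    # which packet number space / path

# Single space: ACKs from any path refer to the same numbering.
ack = Ack(largest_acked=7, ranges=[(0, 7)])

# Multiple spaces: packet number 3 exists independently on each path,
# so the receiver must say which space it is acknowledging.
ack_a = AckMp(largest_acked=3, ranges=[(0, 3)], path_id=0)
ack_b = AckMp(largest_acked=3, ranges=[(0, 3)], path_id=1)
assert ack_a.largest_acked == ack_b.largest_acked
assert ack_a.path_id != ack_b.path_id
```

The point of the model is only that the same packet number is ambiguous across per-path spaces unless the frame carries a path identifier.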
J: Yeah, that's basically what I just said. So the current state is that we have mandatory support for multiple packet number spaces when we have non-zero-length connection IDs, and we have optional support for zero-length connection IDs, which means using a single packet number space, and you have to adapt different elements in congestion control and RTT, like the considerations we just discussed. And if a host does not want to support a single packet number space...
J: ...it just has to restrict itself to sending on only one path; the sending host has to enforce that it sends on only one path. So this is one of the issues we actually wanted to raise: do we want to keep the single packet number space? We made some evaluation reports, both from Alibaba and from myself, and we had a few key findings.
J: The first one is that, given the RTT heuristic described before, we can get RTT samples which are quite correct. But even without losses, we noticed that with a single packet number space you can have huge ACK holes, and if you limit the number of ACK ranges you advertise in your ACK frames, you can actually decrease single packet number space performance. Also, as a receiver, you typically keep a lowest packet number down to which you keep state to track received packets, and anything below that floor is considered a duplicate, or is treated as lost. If you do not change that heuristic on the receiver side when doing single packet number space multipath, what happens is that you might mistake valid packets for duplicates and just drop incoming packets at the receiver side. Overall, because of all the points we just described, on large transfers the performance of the single...
J: ...packet number space is typically lower than with multiple packet number spaces, because you have acknowledgment issues and spurious losses. That brings me to my first question; next slide.
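The receiver-side failure mode just described (a duplicate-tracking floor discarding still-valid packets) can be shown with a small simulation. The window mechanics below are an assumption for illustration, not the draft's exact rules, and all names are invented.

```python
# Toy receiver with a bounded duplicate-tracking window: anything below
# (largest - window + 1) is treated as a duplicate and dropped. With one
# shared number space spread over a fast and a slow path, a burst on the
# fast path can push the floor above packets still in flight on the slow
# path, so valid packets get discarded.
class Receiver:
    def __init__(self, window):
        self.window = window
        self.largest = -1
        self.received = set()

    def on_packet(self, pn):
        floor = self.largest - self.window + 1
        if pn < floor or pn in self.received:
            return False          # discarded as a "duplicate"
        self.received.add(pn)
        self.largest = max(self.largest, pn)
        return True

rx = Receiver(window=4)
for pn in [10, 11, 12, 13]:       # burst arrives first on the fast path
    rx.on_packet(pn)
# Packet 8, sent earlier on the slow path, is valid but now sits below
# the tracking floor (10) and is dropped:
assert rx.on_packet(8) is False
```

A real receiver would need either a much larger window or a path-aware heuristic, which is exactly the adaptation the presenter says single packet number space forces on implementations.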
J: The first question is: do we know of any strong use case that requires zero-length connection IDs with multipath in the base draft?
L: [name unclear], Fastly. I actually have an argument against having any use case for this, and that is the privacy concern that we have in RFC 9000, which says that an endpoint should not initiate migration with a peer that has requested a zero-length connection ID, because traffic over the new path might be trivially linkable to traffic over the old one.
L: I think this might apply to multipath QUIC using zero-length connection IDs, and if that's the case, I don't think we are even allowed to encourage it, I mean.
N: So you're citing text from the security section, right? Okay. I did look up this text, because I was trying to figure out how to get rid of the single packet number space, and I was looking at the part about migration. RFC 9000 actually does allow migration without a connection ID.
N: It's kind of what you would think, right; I mean, there is a security consideration, but to be in line with RFC 9000 we should probably allow it, because that's what the base spec does. And that's the decision to take: whether we can actually restrict this more than it's currently restricted in the base spec.
L
I
think
that's
a
fair
argument,
so
I
think
the
question
here
is:
if
we
want
to
spend
our
effort
specifying
something
that
will
say
that
should
not
be
used.
N: Maybe go to your next slide. The thing that would actually make this spec very easy would be to say it must not be used, and that's actually something I would like to figure out today.
J: Yeah, so actually that comes to the second question that we wanted to discuss: should we keep support for the single packet number space at all in the base draft? We have two kinds of solutions. The first one is to keep the draft as is, so we keep single packet number space support for zero-length connection IDs; otherwise we use multiple packet number spaces for non-zero-length...
J: ...connection IDs. The other solution is just to drop the single packet number space and keep only the multiple packet number space solution, and within that variant we have two sub-variants. In the first, the initial path may work with zero-length connection IDs, but if you want to use additional paths, you have to have non-zero-length connection IDs, which you can provide afterwards.
N: To make this more concrete, the way I think we could remove a lot of text from this draft would be to say that if multipath is requested in the handshake and there is no connection ID in the handshake packets, then you must not negotiate multipath support; only if there's a connection ID for the first path are you allowed to negotiate multipath support.
N: That would be the easiest thing. But in line with the current draft, you could actually say: I have implemented multipath and I'm willing to use it, even if you don't have a connection ID on the first path, because I don't even know yet if you're using multiple paths later on; I'm still willing to tell you that I'm multipath capable, and then we figure out what to do later. And that is even more complicated, because in that case you can use the first path without a connection ID.
N: So that might actually enable some use case, I don't know. You then have to use connection IDs on the following paths, and that's all fine, but then you also have to support migrating the first path, the one without a connection ID, to a different path, or you effectively leave the door open for receiving a packet without a connection ID on a different path.
M: As we were talking about in the last meeting, as someone who's actually looking into implementing and deploying multipath right now, I don't think we are even going to support zero-length connection IDs at all.
M: So I guess, if there's someone who wants to have that supported, they should probably speak up now, because I think most deployments are not going to support this at all, and it's probably better to just remove the support for single packet number spaces, or even zero-length CIDs completely, from the draft.
A: Okay, so would the authors be happy with a potential poll asking the question: should we remove zero-length CIDs and the single packet number space from the current document? Or is that not sufficiently clear?
J: Yes. Just one point: as an editor, the document would be much simpler and much more readable if we keep only one variant, because currently we have a lot of points where, if you are using multiple packet number spaces, you do this, and if you are using a single packet number space, you do that. So as an editor, it is simpler if we only keep one variant. Alex.
P: Google. Just over lunch I was discussing with David Schinazi a potential alternative design that was sort of similar to multiple packet number spaces, and I was wondering if it had been considered, which is: what if we did multiple QUIC connections, possibly steered to the same server using the encrypted connection ID draft that Martin Duke and co are working on, and we just bonded the connections together? Because in many ways that's really similar to what we're doing, and I feel like...
P: Sorry, Mirja is asking if we can explain a little bit more. Basically, what I'm thinking is: it looks like we have at least two ways of doing this. One is that we can add this support intrinsically into each individual connection, or we could bond together multiple connections that each have their own individual paths. I'm mostly just asking whether we had considered those trade-offs, because it seems like we're making a lot of choices right now, adding complexity into a connection, that I wonder if could be handled at a different level.
O: Eric Kinnear, Apple. It's certainly worth talking about cool stuff, so I don't want to reject things on a gut feeling, but one of the important parts about multipath, I think for us, is the ability to have a stream that is split across multiple paths, and so once you lose that...
O: This is a logical byte stream that is going to be delivered in order. Now, in-order is interesting when packets are coming in from different paths, but part of the point is to be able to be partway through some stream and have the rest of the stream go over a different path. If you start trying to bond two different QUIC connections, I think you lose that property.
P: Yes, it would definitely have to be done at a different level. But I guess that's effectively the question I'm asking: do we want an abstraction inside of a QUIC connection to deliver this, or do we want an abstraction on top of multiple QUIC connections to deliver this? Because I feel like you can get the same effective property; it makes the application look the same either way.
H: Ian Swett, Google. Can I get in now? Sure. Okay. I definitely discussed this at some point with someone, and I have no idea who, but I think, number one, it's a very good idea to really consider and walk through.
H: Basically, what I think it requires is a really clean split between what I might call the connection layer, which is loss recovery, loss detection, congestion control, the crypto handshake and all that stuff, and the session layer which sits on top of it, which is streams, reliability and all of those things. I think it's a very doable concept, but in order to give it good consideration...
H: ...probably what should happen is something like Alex and I sitting down for three hours and writing up a short draft sketching out how it might work, and then you can all basically either say it seems like a reasonable design, or that it's not going to work because it has all these holes. Does that seem like a reasonable way forward?
I: So if that's something you want to go away and write up, and maybe email to the list, feel free to do that. I would like to remind people that we did spend a long time getting to the point where we were comfortable with adopting multipath QUIC and the feature set it would provide to people, like the stream splitting that I just mentioned.
I: So, you know, I don't want us to... We're trying to hone down from two options to one right now, and this sounds like we're adding a third. Maybe I'm reading it wrong, but it's adding complexity to the question, and my concern is that we halt progress while we wait to figure out some more complex things.
H: I would argue we should kill the existing single packet number space design today regardless; I don't think that's on the table, from my perspective. I think either we do multiple packet number spaces or, you know, in a week or a few weeks we decide that we like this other thing better. I don't think we should keep single.
I: Okay, that's good, but I would just say: go away and work on your thing, and if you come back with something, we can review it at that time. But for the editors of multipath QUIC: take the outputs from this meeting and continue working as hard as you've been doing. Thanks. We have more people in the queue.
D: Yes, hello everyone. First, to the question on the slide: I agree with pretty much everyone else who's spoken so far, to say that we should probably just keep the multiple packet number spaces and kill the single one. That is fine.
D: Regarding this other direction: I think, you know, if you want to go off and write something up and have a new draft about blended QUIC connections that you do at an application level, and if you have use cases for it, great, go ahead and do that. I do not think that is multipath QUIC; that's a separate thing. Certainly there are applications that may not want to use multipath QUIC, but there are applications that want to use it at a transport level, and I don't think we should confuse those things.
D: I think it's fine to have multiple techniques that can make use of different paths and different available networks, but having a transport-level function for this that is equivalent to MPTCP and its capabilities, but also allows you to have different streams scheduled over different paths...
D: ...is highly useful, and the work that's been done on this draft is extremely good and a very good basis. That shouldn't prevent other innovations around stream bonding at a higher level, but I would strongly object to having another design be put forward to replace this work and then stopping work on this, because this is an important basis for work.
R: We did in fact look at that very early in the discussion of QUIC multipath. I mean, there was a lot of discussion, and it was: hey, we can do a lot of the scenarios by having multiple connections instead of just one.
R: We never did the "have multiple connections and bring them together" option, because I think that brings all the complexity of multipath, the practical complexity of implementation, and then adds some more; plus, you have to do multiple complete handshakes and all that kind of thing. So, I mean, I'm all for letting a hundred flowers bloom and having someone study that if they want to, but I would certainly not like stopping the current discussion.
R: As for the question on the screen, I support the consensus of the folks here that we should basically keep the draft simple and just say: hey, we...
C: [name unclear], MoQ enthusiast. I just want to point out that I very much agree with what Eric was saying before about the ability to have a stream be split across two different paths being an important use case, and I think it's particularly important for some of the Media over QUIC work. So if the group is going to continue along that path, and possibly add another path for this bonded thing, then that seems fine; see what the bonded people come up with and consider it separately.
C: But I think it would be very useful to consider it separately. Media over QUIC is not yet talking about multipath, but I think its presumptions about what it's going to get from multipath QUIC are very much based on the idea that it presents you a kind of single interface rather than a bonded one.
K: Yeah, so at the beginning we also considered connection bonding, but I think the benefit of multipath is not just about better aggregation; it's also about the ability to schedule at the packet level, so that you can adapt to network variation very quickly. That's one of the big advantages that the current draft offers. So this is my first comment.
K: The second comment is on single packet number space versus multiple packet number spaces: at Alibaba, in our use cases we use multiple packet number spaces, so I think if we just keep one, the multiple packet number spaces, it will definitely simplify things a lot.
N: Yeah, I went last in the queue because I just want to check quickly, to wrap up this conversation. I think there's broad support to get rid of the single packet number space. Just to confirm what this means: what we will say is that a server must not negotiate multipath support if there is no connection ID in the Initial packet, right? Just so everybody has the same understanding.
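The rule being confirmed here can be written as a one-line check. This is a paraphrase of the in-room proposal, not draft text; the function and parameter names are invented for illustration.

```python
# Sketch of the proposed negotiation rule: multipath may only be
# negotiated when the peer supplied a non-zero-length connection ID,
# since zero-length CIDs would force the single packet number space
# variant that the group wants to remove. Names are illustrative.
def may_negotiate_multipath(peer_cid_len, peer_offered_multipath):
    # A zero-length CID means paths cannot be told apart by connection
    # ID, so under the proposal the server must refuse multipath.
    return peer_offered_multipath and peer_cid_len > 0

assert may_negotiate_multipath(8, True)        # normal case: negotiate
assert not may_negotiate_multipath(0, True)    # zero-length CID: refuse
assert not may_negotiate_multipath(8, False)   # peer never offered it
```

The check would run once, during the handshake, when the transport parameters and the peer's connection ID length are both known.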
A: Yes, and I think we will do a quick poll just to confirm what seems to be broadly supported, in case there's anybody who's holding back their disagreement. So the question here is basically what Mirja was going to say, which is: should we remove zero-length connection IDs from the current document? So raise your hand if you think that is true, or vote in Meetecho; I guess, virtually raise your hand.
A: Okay, so that is fairly convincing, I would say, in terms of the support for removing it. There was someone in the queue?
E: In the Meetecho attendance we have 165 people reported. For the show-of-hands question, "Should we remove zero-length CIDs / single packet number spaces from the current document?", 53 people raised their hand; two people did not raise their hand.
I: Sorry, just to preempt what Matt's going to say here, because we're on the same wavelength: we will take this input. I think the answer is pretty clear, but we'll take this to the list to confirm consensus, so do look out for that email after this meeting.
A: Okay, which slide would you like to continue from at this point? I still...
J: Okay, so just talking about the updates, coming back to the open issues: at the last IETF there was some willingness from implementers to know how they should map the keep-alive mechanism of single-path QUIC to multiple paths, and so we added some text in the implementation considerations saying that, in the end, the keep-alive scheduling you want to apply will depend on the application that you will be serving.
O: Sorry, before we get too far off keep-alives: is it worth having text in there stating that this essentially allows either side, and you end up with the minimum of either side's choices for any path? Is that something we want to explicitly state, to make it clear that, between you and whoever you're talking to, if I decide paths one, two and three need to stay alive and the person I'm talking to wants four, five and six, that's an awful lot of paths?
N: So effectively this is an interface question, right? If your interface just says "send keep-alives" and doesn't give you any guidance, then I don't know what you do; if the interface says "send keep-alives on path X"...
N: ...you know that it's an application decision. But actually, for the term keep-alive, I find it really confusing, because keep-alive means "I still have a connection to the other endpoint on at least one path", right? So I kind of need to ping all the paths to make sure that there's at least one path alive, and I would actually like to define this more strictly, rather than leaving it as an interface question.
O: I think it's fine to say that it's not our business here, and as we actually get more experience with it, we'll probably bring an extension that introduces some more explicit signaling, around "hey, I think path two is the one I really want us to care about the most, just ping on that one", or whatever. We've also got a couple of cases where we'd like to let most or all of the paths go away and be able to re-establish them later. So it seems like there's a lot to unpack here, and I...
N: I mean, yeah, probably it's a good idea to leave it open, but I'm wondering if there's any use case where you would actually say "please only ping path two", because usually the use case is "please find out if there's connectivity", right?
T: Jana Iyengar, Fastly. To the question: I think it's very tightly tied to the question of scheduling, and what paths you want to use, when you want to use them, and what the purpose of the multiple paths is. And that is really an application and context question. In this particular case, just because you have multiple... let's say, for example, you have a cellular and a Wi-Fi path.
U: Okay, please tell me if this is taking us into the weeds too much. One of the things with QUIC that I really liked is that you don't need keep-alives as much as you think you do, because we have zero-RTT resumption. And there's a consideration here: if you're coming from old-style protocols where you needed to keep the thing alive, because otherwise you had this gazillion-overhead handshake again, you don't have that with QUIC, right?
T: There you go, yeah. So what Lars said is actually really important here as well. In this particular case, you know, I haven't looked at the multipath draft recently, so you'll have to tell me when I'm wrong about this: if you drop the path, is there actually a latency penalty to rejoin or add the path back?
R: I'd say pretty much what Jana says. I mean, it's silly to believe that we will need keep-alives forever and that we know today what implementers will learn tomorrow. We have a tendency in some drafts to be very, very specific about what implementers will do. I would much rather leave the current text as is and say: hey, you're making decisions, so make them.
N: I just want to clarify, or ask: I don't think we're currently saying that you can actually reopen a path without path validation if you have used it previously, because we don't really hold state about what paths there were. So we always require path validation. Is that a requirement that you... no?
K: Oh, I just want to comment: there is also another penalty if you reopen the path, which is that the congestion window needs to be reset. And also, I think the reason why we have this text here is that in our deployment experience we do see the problem.
K: If you don't keep sending keep-alives, a ping, on a path... For multipath, sometimes, if you use a min-RTT scheduler, you keep sending packets through the shortest path while the other path remains idle for some time, and then, when you want to use that path, you send packets on it and you find some black-hole issue. That's what we have observed in our experiments, and that's why I think we want to add some text here.
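The scheduler behavior described here is easy to reproduce in a toy model. This is not the draft's scheduler (the draft leaves scheduling to implementations); the names and structure below are invented for illustration.

```python
# Toy min-RTT scheduler: always pick the path with the lowest smoothed
# RTT. This is exactly how a secondary path can sit idle long enough to
# go stale (black holes, reset congestion window) unless keep-alives or
# probes are sent on it, as described in the deployment experience above.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    srtt: float        # smoothed RTT in seconds
    last_sent: float = 0.0

def pick_path(paths, now):
    best = min(paths, key=lambda p: p.srtt)
    best.last_sent = now
    return best

paths = [Path("wifi", srtt=0.020), Path("cell", srtt=0.060)]
for t in range(100):                 # 100 scheduling decisions
    pick_path(paths, now=float(t))
# The cellular path never carried traffic, so its liveness and
# congestion state are unknown when we finally need it:
assert paths[0].last_sent == 99.0 and paths[1].last_sent == 0.0
```

A deployment that wants the secondary path usable on short notice would interleave occasional pings or low-priority data on it, which is the motivation for the keep-alive text being discussed.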
O: I would suggest that we move on to the next slide.
A
It
sounds
like
there's
a
lot
of
thoughts
on
this,
and
so
I
would
encourage
people
that
have
opinions
on
this
to
either
somebody
take
it
to
the
list
or
perhaps
comment
do
some
hashing
out
on
the
issues
involved,
because
I
don't
think
any
of
this
discussion
was
has
been
reflected
there.
Yeah.
O: There's a whole bunch of stuff to unpack there, around both bringing up the congestion window (like, if I've got a path that's been idle enough that I needed a keep-alive, we've probably dropped the window already anyway) and the fact that you can actually use application data in your path validation to help bring the window up sooner and use that to prime it.
T: I agree, and I'm happy to discuss this in a separate space. However, I want to note that there are things like this which should basically get the same considerations as we applied for connection migration. We shouldn't reinvent the RFC language here in this draft; I would like us to reuse the work that we already did for the RFC.
A: I think we can move on from this for now; perhaps we'll need a new issue, possibly a mailing-list discussion, and people can also talk here while they can. What do you want to... let's try slide six, jumping around.
J: Okay, so one of the remaining issues that we recently found is: what about receiving a PATH_ABANDON with an unknown path identifier? This can arise in the following situation. You have a host that wants to close a path, and with PATH_ABANDON you identify the path with the connection ID used by the peer to send packets, so the destination connection ID used by the peer, and you provide that sequence number in the PATH_ABANDON. But before it reaches the other side, the peer might rotate its connection ID and send "okay, I retire the connection ID", and so it forgets everything about that connection ID, which means that the receiver cannot map the ID to a path anymore.
J
And so we have an issue. The current proposal concerns the sender of the RETIRE_CONNECTION_ID: when you retire a connection ID, you just keep some state for a while saying that this ID was mapped to that given path, so that even if you have retired the ID, you can still process the PATH_ABANDON. And if you cannot process the path ID that you have in the PATH_ABANDON, the other solution is just to say: okay, I don't know which path you want to close.
J
So, let's close the connection — because otherwise you can have a path left in a half-open state, and this is probably not what we want on the connection.
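The retention proposal just described can be sketched as a small lookup table. The names and the idea of a fixed grace period are illustrative assumptions for this discussion, not draft text.

```python
class CidPathTable:
    """Keep retired connection-ID-to-path mappings around for a grace
    period so a late PATH_ABANDON referencing a retired CID sequence
    number can still be resolved to a path."""

    def __init__(self, grace_period):
        self.active = {}           # cid_seq -> path_id
        self.retired = {}          # cid_seq -> (path_id, retire_time)
        self.grace_period = grace_period

    def retire(self, cid_seq, now):
        path_id = self.active.pop(cid_seq)
        self.retired[cid_seq] = (path_id, now)

    def resolve(self, cid_seq, now):
        """Map a CID sequence number (e.g. from PATH_ABANDON) to a path,
        falling back to recently retired IDs."""
        if cid_seq in self.active:
            return self.active[cid_seq]
        entry = self.retired.get(cid_seq)
        if entry and now - entry[1] <= self.grace_period:
            return entry[0]
        return None  # truly unknown: close or ignore, per the debate above
```

Returning `None` corresponds to the fallback case in the discussion: the receiver genuinely cannot tell which path the peer meant.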
R
It says: I don't want to receive data on that particular five-tuple. So if you want to stop receiving, you send a message — and if you see that the connection has changed or whatever, but you basically keep receiving data because the peer has not, in fact, abandoned the path — then at that point you send another one, or you drop the connection. But it's really simple; I mean, you don't have to.
V
So — Martin Thomson. I've always been a little bit uncertain about this whole idea of taking the sequence numbers of the connection IDs and using them to identify paths. It seems like if either endpoint changes its mind about which connection ID it wants to use, then you're effectively changing the identifier for the path that's involved. And so, for something like path abandonment — effectively, every connection ID that you generate creates a new path identifier; it's asymmetric.
V
You've been economizing in terms of how you do this identification, because you really do want to use the connection ID as a proxy for this, but it seems to me there is an extra level of indirection necessary in order to get something like PATH_ABANDON to work — because when you say "I'm abandoning my path number seven", you might also have path number 11.
V
That
is
really
the
same
path,
and
the
other
side
has
four
and
five,
which
are
the
reciprocal
flows
on
that
same
path,
and
they
have
no
idea
of
knowing
really
that
that's
what
you're
referring
to
at
this
point,
because,
ultimately,
that's
just
the
way
the
protocol
works
so
seems
to
me
like
this.
There's
like
a
bigger
problem
here,.
J
Yeah, no — I think it can happen. We have to think about that. It's a good point; we should think about it. Yeah.
R
Yeah. I mean, let's remember why we did this linkage between connection ID and path ID. The reason is that we want to have AEAD encryption, and AEAD encryption relies on nonces, and the nonce is the sequence number. So if we do that, we need to make sure that the same sequence number and path ID never occur at the same time. That means that the receiver has to bring the path ID inside the encryption initialization — inside the...
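The nonce-uniqueness argument being made here can be illustrated with a toy construction. In RFC 9001 the AEAD nonce is the static IV XORed with the packet number; the sketch below additionally mixes a per-path sequence number into the high bits so the same packet number on two paths never yields the same nonce. The exact field split is an invented illustration, not the multipath draft's actual construction.

```python
def nonce(iv: bytes, path_seq: int, packet_number: int) -> bytes:
    """Toy 96-bit AEAD nonce: XOR the static IV with a value combining
    the path sequence number (high bits) and packet number (low 62 bits),
    so (path_seq, packet_number) pairs map to distinct nonces."""
    assert len(iv) == 12            # 96-bit AEAD nonce, as in RFC 9001
    pn_field = (path_seq << 62) | packet_number
    pn_bytes = pn_field.to_bytes(12, "big")
    return bytes(a ^ b for a, b in zip(iv, pn_bytes))
```

The point of the sketch is only the uniqueness property: two paths reusing the same packet number still produce different nonces, which is why the path ID has to feed into the encryption somewhere.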
O
So I was just thinking about what you would do if you were doing some of the cooler schemes that Kazuho had previously described — things like "I'd like to send a different connection ID for every packet". If I'm doing that, and that requires bringing up and tearing down a new path, or having each side cycle what they think of as the path, on a per-packet basis — it seems not great.
H
Martin's suggestion seems reasonable. I was going to suggest something different, but you could just use the ID that initiated the path — the one that initially created it — and then, even if you change the connection ID on every single packet, that would be stable over the lifetime of the connection. So, I don't know — I think there are solutions here to fix this problem. I do think it is a problem, and I think you need to do something, but probably something other than the two things.
A
I think that will probably wrap it up, but it sounds like there was a lot of feedback on this. So, yeah — okay.
J
Very quickly, just to say that, given the last updates, we then have to remove the single-packet-number-space text. But for multiple packet number spaces it is starting to be stable, and we are starting to have some implementations supporting multipath QUIC, so we should just continue to finalize the open issues and start doing some interop testing to assess whether multipath QUIC is ready for further steps. And with that I leave the floor to the next presenter.
A
Having slight echo problems — okay, let's see if I can do the...
H
Yeah, just keep...
H
A few things happened. We removed IGNORE_CE, because the use case for it was unclear and there was an increasing number of concerns about cases in which it might or might not be dangerous. Basically, at some point we determined that the value was low and the risk was relatively high, and just ditching it was the simplest thing to do. There was consensus in the room last time to do that, and that has been done.
H
I'm still working on the PR to change Ignore Reordering to a reordering threshold, which is a slightly more complex change. I need to go back and update that PR based on the feedback I've received. I would encourage people to look at that PR, maybe a little bit later in the week, so I can at least integrate the feedback I haven't integrated yet — and then, once that's done, in theory it should be...
A
I'm in the queue myself, not as a chair. Is there any reason that the reordering threshold should not just be a varint, rather than eight bits? Because it's...
H
...difficult for me to decide what maximum value to disallow, so I'm not sure. I mean, we could put in some text — we already have text that basically says, as a receiver, you should acknowledge at least once per RTT, and some other things as backstops. So I guess I don't know how to move forward with a max value; that is the challenge.
H
Say we put in a max of a thousand — just throwing a number out there; I don't know if that's the right number. At some point the congestion window dominates: if the congestion window of the peer is less than whatever reordering value you put in, the reordering value basically stops mattering, because the time threshold and the backstops in the draft start fixing things — those are the things that are actually operating the mechanism.
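The interaction described above — a packet-count reordering threshold that only matters while it is smaller than the number of packets in flight — comes down to the RFC 9002-style packet-threshold check, which can be stated in one line. This is a minimal illustration, not the full loss-detection machinery (no time threshold, no PTO).

```python
def lost_by_reordering(sent_pn: int, largest_acked: int, threshold: int) -> bool:
    """A sent packet is declared lost once `threshold` or more
    later-numbered packets have been acknowledged past it. If the
    threshold exceeds the packets in flight, this test can never fire
    and the time-threshold backstop takes over instead."""
    return largest_acked - sent_pn >= threshold
```

With a congestion window of, say, 20 packets, any threshold above 20 behaves identically to ignoring reordering entirely — which is the speaker's point about why picking a hard maximum is awkward.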
H
So that's why it's so difficult for me to reason about what max value even makes sense — because even a value like 100 might be so enormous that the other things start kicking in, and it's basically equivalent to ignoring reordering. Yeah.
T
Three things. One: we do not know what is large, and I agree with that. On large-BDP settings — we know from experience that large-BDP networks generally carry a lot of packets, a lot of data, a lot of sequence numbers, and we want to allow for large numbers to exist. That's one.
T
Second, we learned through the QUIC experience that as soon as we moved things to varints, life became easy. So whether or not we actually recommend a limit, having this as a varint makes sense, because it gives us the flexibility of not changing the protocol but changing recommendations based on experience. That's a huge improvement. And number three: again, I don't think we understand what number makes sense here as a limit, so we should allow for the experimentation — and that's where I...
H
...would go with it. There was actually a reason why I stuck an eight-bit number in there originally: my understanding of the TCP implementation in the Linux kernel is that it uses a uint8 as the adaptive reordering threshold, so 0 to 255 is the same range that Linux has. But yeah — that's merely copying one status quo into another.
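For context on the varint the speakers are advocating: the QUIC variable-length integer of RFC 9000 encodes any value from 0 up to 2^62 − 1 using a two-bit length prefix, so a threshold expressed as a varint has effectively no protocol-level ceiling. A sketch of the encoding:

```python
def encode_varint(v: int) -> bytes:
    """QUIC variable-length integer (RFC 9000, Section 16): the two high
    bits of the first byte encode the length (1, 2, 4, or 8 bytes)."""
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x4000_0000:
        return (v | 0x8000_0000).to_bytes(4, "big")
    if v < 0x4000_0000_0000_0000:
        return (v | 0xC000_0000_0000_0000).to_bytes(8, "big")
    raise ValueError("varint out of range")

def decode_varint(b: bytes) -> int:
    """Inverse: read the length from the prefix, mask the two prefix bits."""
    length = 1 << (b[0] >> 6)
    return int.from_bytes(b[:length], "big") & ~(0xC0 << (8 * (length - 1)))
```

The test values below are the worked examples from RFC 9000 itself.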
N
Yeah, so I don't think an absolute number makes sense, because the one assumption written into the RFC is that TCP RACK assumes reordering happens within one RTT — within one window, basically — and this window doesn't have a max value, right? So we don't know what will happen in the future. I don't think we should have a max value, but maybe there are additional considerations that we want to write down at some point. It just shouldn't be a number. Okay.
B
Different question: how near do you think you are to being finished, if we can deal with these little things? I...
W
Right — yay, qlog. Next slide, please. As you might have noticed, we have a new editor on the team. He's a very mysterious figure — you don't actually know his name yet; we only know him as the QUIC-hat-wearing man — and he was so gentle to us today: this is the first meeting, I think in ages, at which he hasn't actually worn his QUIC hat. Thank you, mysterious man. But so, under...
W
No, no — previous slide, previous. Under his guidance we have made quite some progress this time. A lot of it is editorial, which you can nicely follow in the changelogs of our three different qlog documents; you can also find them at the bottom of the slide. There are, however, a few things that I wanted to bring to the working group today. Obviously — next slide, please — the main one, an open standing issue for a very long time, is about security and privacy considerations.
W
This is a very complex issue — who knew? It also turns out there's relatively little practical guidance on this in the IETF, at least not when it comes to logging. And then, in discussion with some people who know a lot more about this than I do, it emerged that things like anonymization have long been shown to not be enough to actually protect against a lot of the different risks here. Another important part is that doing this right inside of qlog would substantially delay the qlog documents, even more than they already have been. So the new approach — next slide, please — is to do things a bit more simply: to have only relatively basic guidance within the qlog documents, which primarily says: there are risks.
W
These are some of the risks, and here are some practical examples of where in the qlog there might be sensitive data — without having to actually list everything — and then also some tips or general ideas about how to manage that risk, without going into unnecessary detail. In parallel to that, however, it would be good — I think; I've talked to a few people about it who seem to agree — to have a parallel effort in the IETF discussing this problem.
W
In more general terms — across qlog, beyond qlog, I should say — how do we give privacy guidelines for logs? We have some material on the protocols themselves: for example, linked on the slide, RFC 6973, which I find very interesting. Maybe we should have something like that for logs as well — maybe, maybe not; that is still very much TBD. If you're interested in that, please come talk to me; we will see what we can do. So that's that. Now for the first part — the qlog-specific part.
W
We now have some very specific text already in the document — next slide, please. I think we have some very good guidelines in there at this point, so I think that part is relatively mature, but there might be other sections in the privacy and security considerations that need a few more eyes on them.
W
And the ideal endpoint would be that you could take the base documents and whatever extensions you have, extract the CDDL from all of them, merge them together into one big pile, and then have that be valid, strongly typed CDDL. That would be the dream goal, right? And CDDL has some extension capabilities, some of which, as you can see on the slide, we already use within the main documents as well, to link the main schema and the event definitions.
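The "extract and merge" goal just described can be illustrated with a toy merge over per-document event definitions: each qlog document (core or extension) contributes events, and the merge should fail loudly when two documents redefine the same event instead of silently clobbering each other. All names here are hypothetical; real qlog tooling would operate on CDDL fragments rather than Python dicts.

```python
def merge_event_schemas(*documents):
    """Merge event-name -> schema maps from several qlog documents,
    rejecting conflicting redefinitions of the same event name."""
    merged = {}
    for doc in documents:
        for event_name, schema in doc.items():
            if event_name in merged and merged[event_name] != schema:
                raise ValueError(f"conflicting definition: {event_name}")
            merged[event_name] = schema
    return merged
```

The conflict check is the interesting part: it is exactly the failure mode that copy-and-redefine extension styles risk when two future extensions touch the same definition.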
W
So I took some notes from some of my fellow QUIC and H3 enthusiasts and defined a datagram extension document — for the two different datagram RFCs, but this time defining qlog events for them. This is a separate draft on my personal repo; I'm not trying to get it adopted anywhere, or anything like that.
W
It
was
just
a
way
for
us
to
see
if
there
is
anything
missing
in
q
log
to
make
this
possible
turns
out,
there
is,
or
there
was
we
fixed
some
of
that
in
the
last
update,
as
well,
specifically
the
ability
to
add
new
frames
that
was
missing.
Then
it
turns
also
turns
out.
We
need
to
Define
new
transport
parameters
to
negotiate
this.
We
need
a
new
H3
setting
to
be
able
to
enable
this,
and
we
also
want
this
in
q.
Log
seems
simple
enough.
Just
add
those
to
the
qlog
specs
as
well.
W
That's fine, but then the question is: where does that end, right? Because there might be other ways you want to extend things — you might want to add one new field to an existing event, for example. If you want to do that for all of the different qlog events, well, then everything needs to become an extension point, right? It's definitely an option. Is it the best option? I'm not sure. So — next slide — I tried to look at how other things in this atmosphere are doing it.
W
I'd hoped to find some inspiration in the multipath draft — and maybe I did, maybe I didn't; I don't really know. What they basically do is redefine the ACK frame: they copy over whatever was in there, I think, and then add some of the stuff that they want. That's a valid approach, but you can see how it could get into conflicts with other future extensions that try to do the same thing. So: do we want the same approach for qlog, or not?
W
Is it a pipe dream to have everything in CDDL and to be able to just extract the CDDL and have everything match up nicely, or would you expect people to do some manual labor for these things as well? It's somewhat unclear to me at this point. People who have an interest in this, or especially experience with it: let us know what a generally good approach would be.
W
It's so fun. I don't have too much more anyway — next slide. The other big thing is QPACK. Currently we basically just have the wire image copied into qlog, which is somewhat useful if you're doing low-level debugging, but we would like some higher-level events in there —
W
— events that are actually interpretable by people who don't know the innards of QPACK, including me. So, very practically, I would like to have a simple head-of-line-blocked event, or something similar — not necessarily this exact thing — that just tells you: okay, this is where QPACK was having problems on this stream, because things were delayed. That would help a lot. We don't have that currently, and we are slowly starting to work towards it.
W
So, people who have experience with QPACK — or maybe even HPACK — who have practical experience debugging this and know what would be useful in terms of high-level events: please let us know. And then the second thing about QPACK — next slide, please — is that currently QPACK and H3 are in the same qlog document, which kind of makes sense because they kind of belong together, but they are separate RFCs, and it would make a lot more sense editorially for us to split them up as well.
W
I kind of really wanted to have a bikesheddable issue here today as well — so this is it; we can discuss it for half an hour afterwards. That's one of the things: if we don't get particularly strong opposition, we'll probably just separate them out, and we will then go from three to four main qlog documents. At this point — final slide, or almost final slide.
W
We
have
some
remaining
design
issues,
open,
I,
think
for
most
of
them
we
have
a
clear
idea
of
what
we
want
to
do
with
them
and
we
plan
to
do
it
by
next
update
there
as
well.
One
of
the
main
Exceptions
there
is
is
a
multi-part
discussion,
so
I
don't
think
we
want
to
go
full
out
on
multi-bomb
in
the
base
documents
yet
or
any
time.
W
That
would
also
potentially
be
useful
for
connection
migration
within
non-multipot
logging
there
as
well,
but
really
we
and
the
editors
currently
have
almost
no
experience
with
either
connection
migration
or
multiple,
especially
not
debugging,
these
things.
So
people
who
have
experience
with
that
I'm.
Looking
in
that
general
direction,
Gentlemen,
please
let
us
know
what
would
be
a
good
approach
that
you
think
final
slide
to
give
people
a
little
bit
more
incentive
to
collaborate
to
speak
up.
W
I've actually brought some very delicious Belgian chocolates. Everyone who comes to the mic, or who answers on an issue, or who helps us this week: come to me and get some — it's Belgian chocolate. I originally wanted to do this about a year and a half ago.
W
No — I originally wanted to do this a year and a half ago, to thank you all for helping me get my PhD with all the qlog stuff. COVID happened, obviously, so this is the first time I'm seeing you all. So basically, anyone who thinks they've helped me with all this stuff throughout the years — which is probably most of the people in this room — thank you. Most of you are therefore also eligible for chocolate, so come find me afterwards. Thank you.
A
Do you want to say your piece?
W
At this point, no. We're taking the approach that everything that is not in the core RFCs will be an extra qlog document as well.
W
We've discussed it at length, and there are several examples historically within the IETF where that has been done for similar things, apparently. That's also, I think, the preferred approach that Lucas was suggesting: to have, indeed, small incremental qlog RFCs for small incremental new events. But we can discuss that — that's just the current way we're thinking about it.
I
Yeah, just to step in here on what we're trying to avoid: the qlog editors have been working hard — and I appreciate their input — but we're kind of stalling, because there's a huge backlog of issues, and while we're trying to figure out our own decisions related to things like CDDL, etc., the QUIC working group and everybody else using QUIC is marching on and adding all of these other things. So if we try to incorporate everything that's ever been defined...
I
...it's just going to stretch out our timelines. So it becomes: where is a good line to draw? We started off with this being the schema for QUIC events and HTTP/3, and datagram didn't really exist in its final form at that stage, so we could just sneak it in. But then, looking back, we would need to justify why we did it for datagram and nothing else.
I
If you're implementing datagram, there's a strong possibility you're trying to use it for stuff like MASQUE or WebTransport, in which case you also want the H3 events; I wouldn't suggest you need two RFCs for that. But suppose you're going to create a document for events related to datagrams — and it's not just the wire format here; there are other events you might want, such as "I dropped my datagram on the floor, because my queuing and buffering model overflowed locally".
I
We can model non-wire things here. So yes, there's process and overhead to every RFC, but we could encourage people to try to define qlog as part of the new extensions they're doing. I don't quite know — I'm really happy to hear input here — but it doesn't seem right to try to lump everything into the drafts that we have adopted into this working group. That seems like it's just going to stretch everything out, to me.
N
Yeah — and that was definitely not my proposal. But I think we also don't need to be religious about this, right? Datagram is an extension that is actually widely deployed — most stacks have it — and it's a really small extension, and having an RFC is a lot of overhead. I think we can just take a practical approach and go as we think makes sense.
N
I don't think we have to have strict rules about this, because you can always write a new RFC that just adds additional features. So I think we should just take the stuff that we think is important right now, based on a kind of deployment level. It's actually not important to get it absolutely right — but it is important to limit the overhead here, I think.
J
Just a first clarifying question on qlog: is there a way to indicate which connection ID you are using in a packet or not? So, let's say I send a packet — do you have information saying it was this connection ID? — Yes.
X
Yeah — Jonathan. I was going to say, my inclination would be that the qlog spec should cover all extensions published before it goes to publication; publications afterwards should define it themselves — basically, have a section in the draft saying how to describe the extension in qlog. I guess the question is: in writing that section, is it more important to have a deep understanding of qlog, or a deep understanding of your extension?
X
I would say we should just recommend that all future QUIC extensions — for whatever "should" means — write up in their document how they're used with qlog. Okay.
G
Martin Duke, Google. Plus one on encouraging future draft authors to have a little qlog section if it's applicable. Actually, I agree with the previous speaker more than — I would say the principle of not trying to keep up with everything happening in the working group is sound, because you will never finish otherwise. But I also think it's not necessary to be religious about this, so I don't particularly have a dog in this fight, but I would encourage you to be willing to take things in.
G
If you think it's important, take it in; if not, cut it loose, and we could do a follow-up draft to roll up other things — or not. Not everything needs to be qlogged, potentially, if nobody cares enough to write it up; I think that's okay as well. Let's not be religious about what goes in this draft versus a future draft versus never being documented at all. But yeah, let's finish — let's not race to include everything before publication.
E
David Schinazi, chocolate enthusiast. I'm going to agree and disagree with Martin at the same time. First off: Lucas, I think you're underestimating the cost of publishing an RFC — not only in terms of the authors' time, but also our wonderful, lovely ADs having to click buttons on every one of them, and the...
E
...high amount of money going to the RFC Editor, per page. So I would strongly advise reducing the number of RFCs — save everyone a lot of time. Then, on that note: agreed, let's not keep this open forever, because there will always be something more. Just pick a date — say, today — and say: okay, everything that's published today, like datagram, we put in; everything that's not published today, we don't. Done. Future things can then decide as needed; we don't need to mandate this.
R
We almost had consensus in the chat that what we want is something like: publish the draft now, keep a wiki in which we list all the updates that come in and all the experiments that people are doing, and have something like a yearly update that summarizes everything that has been done since then. That seems like the right compromise. I know it's bad for you, because you'd otherwise get to publish one RFC for each of those additions to qlog.
R
Specifically on the point of path ID: the reason I raised it is that when I was doing the early experiments with multipath — multiple sequence number spaces — the visualization using the existing tools was completely weird. Lines were going in every which direction, or the tools would fall over, and all that. So I think that in the case of multipath we probably need to have some kind of experimentation.
N
I kind of want to second that point. We've been looking into this as well, because we're implementing multipath and you need some logging — and whatever we do, we should see what the best way to extend it is, because you need a path identifier with most of the events that are in there, and so on. There are different ways to do that, and it would be a nice exercise to actually get one solution that makes sense before we wrap this up completely for multipath.
G
No, I'll wait — I won't subject you to that. This, I think, will be pretty quick; not much has happened. We did get another crypto review of the four-pass algorithm, which was the only contentious bit of the crypto, and we did a little tweak based on that. You can see the pull request — it's already committed — but if you have a problem with it, Mike, you know we can always change it.
G
So, when last we met, the conclusion was that we could just sit on this until somebody had actually deployed it, which is what I'm working on. I would say that possibly by Yokohama, and certainly by San Francisco, we should have that done and it should be out in the world. And I don't know — is there anything the working group is looking for specifically, like a report, or just the fact that we did it and it is possible to do?
G
So, there's a non-normative section in the draft now that has been growing over the last couple of versions, and it's about the multi-threading problem. Most of us operate servers with multiple threads, and as connections come in they have to be routed somehow, and depending on your architecture you may not be able to — okay, so there are a number of solutions to this, and the section is non-normative because this is not really a server-to-load-balancer thing.
G
Balancer
thing:
this
is
a
kind
of
server
talking
to
itself
thing.
One
solution
is
for
just
to
treat
it
like
a
layer
for
a
load
balancer
and
have
whatever
is
doing
the
demultiplexing.
The
crypto
connection
ID
have
some
part
of
the
server
ID
Define.
What
thread's
going
to
that's
completely
viable
thing
that
will
work
in
some
architectures
in
in
in
certain
some
architectures,
like,
for
instance,
Envoy,
which
is
near
near
to
our
hearts,
uses
BPF
and
you're
not
going
to
do
as
ECB
and
BPF
unless
you're
much
cleverer
than
me.
G
So
there's
some
other
Alternatives
and
it
just
talks
through
those
one
is
for
again
since
you're
all
kind
of
on
the
same
device.
The
the
thread
can
actually
sort
of
register
with
the
multiplexer.
Here's
a
connection
ID
here
all
the
connection
IDs
that
route
to
me
so
just
have
a
big
table.
G
We
have
a
a
fairly
interesting
solution:
I
think
that
is
kind
of
matches.
The
pattern
of
quick
header
encryption
where,
basically,
you
have
a
big
pile
of
ciphertext,
so
you
apply
like
a
keyed
hash
to
that
ciphertext,
which
generates
a
bit
mask,
which
then
apply
to
a
a
thread
encoding.
That's
that's
somewhere
in
in
the
sort
of
the
the
non-separate
text.
Part
of
the
connection,
ID,
certainly
in
comments
any
brainstorms
other
ways
to
accomplish
this
problem
again.
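The keyed-hash masking just described can be sketched roughly as follows. The one-byte thread field, the use of HMAC-SHA-256, and all names are illustrative assumptions; the draft's actual layout and keyed hash may differ.

```python
import hmac
import hashlib

def encode_thread(cid_ciphertext: bytes, thread: int, key: bytes) -> bytes:
    """Append a thread encoding to the connection ID, XOR-masked by a
    keyed hash of the ciphertext portion so it isn't visible on the wire."""
    digest = hmac.new(key, cid_ciphertext, hashlib.sha256).digest()
    return cid_ciphertext + bytes([thread ^ digest[0]])

def thread_for_cid(cid: bytes, key: bytes, num_threads: int) -> int:
    """Demultiplexer side: recompute the mask from the ciphertext part
    and unmask the thread encoding in the last byte."""
    ciphertext, masked_thread = cid[:-1], cid[-1]
    digest = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return (masked_thread ^ digest[0]) % num_threads
```

Because both sides share the key, the mask round-trips deterministically, and no per-CID routing table is needed — which is the appeal over the register-with-the-demultiplexer alternative.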
G
I don't think it needs to be standardized, because it's a server implementation issue, but it would be nice for that section to be editorially clear and to have as many good ideas in it as possible. So I invite you to take a look at it. Other questions or comments?
A
Thank you, Martin. So I think now we are moving on to our as-time-permits items, and we have Marten Seemann to talk about the reliable stream reset draft.
Q
Next slide, please. So, what happens in QUIC when you reset a stream? You send your data in STREAM frames, and then you send a RESET_STREAM frame containing — yeah.
Q
Is this a bit better? Okay. So, in QUIC you send the stream data in STREAM frames, and then, when you want to reset the stream, you send a RESET_STREAM frame that contains the ID of that stream. In the best case, all frames are received by the receiver — but packets get lost, and STREAM frames may not be retransmitted to the receiver.
Q
The receiver knows that STREAM frames will not be delivered anymore, so it typically won't wait for stream data to arrive: as soon as it receives the RESET_STREAM frame, it will tell the application that the peer reset the stream. Next slide.
Q
This can lead to certain problems, depending on how you use the stream. A very common pattern in applications is to open a new stream and then put some kind of stream identifier at the beginning.
Q
This is not the QUIC stream ID — it's an application-layer identifier that you put in the stream. For example, in WebTransport you can have multiple WebTransport sessions on the same QUIC connection, so the first thing you send on a new stream is basically an HTTP frame that contains a session ID, and that allows the receiver to say: oh, this stream belongs to the first WebTransport session, and this stream belongs to the second WebTransport session.
Q
There are other use cases where you have some kind of application-layer message that might have a header part and a body part, and so on. So this is a very common pattern for building applications on top of QUIC. Next slide.
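The routing pattern just described — and why a plain RESET_STREAM breaks it — can be shown with a toy demultiplexer. The one-byte session identifier and all names are invented for illustration; WebTransport's real framing differs.

```python
class SessionDemux:
    """Route incoming streams to application sessions based on an
    identifier carried in the first bytes of each stream."""

    def __init__(self):
        self.sessions = {}  # session_id -> list of stream ids

    def on_stream_data(self, stream_id, first_bytes):
        session_id = first_bytes[0]  # toy one-byte identifier
        self.sessions.setdefault(session_id, []).append(stream_id)
        return session_id

    def on_reset_without_data(self, stream_id):
        # The identifier never arrived: the receiver cannot tell which
        # session this reset belongs to -- the problem motivating the draft.
        return None
```

If the packets carrying `first_bytes` are lost and never retransmitted, only `on_reset_without_data` can fire, and the session mapping is unrecoverable.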
Q
And — while this is all in flight — both the identifier and the stream data are lost, and neither of them is retransmitted.
Q
So the receiver of the stream might end up in a situation where it receives a RESET_STREAM frame but no STREAM frames for that stream at all — and it won't know what the identifier of that stream was. Next slide. So, to solve this problem...
Q
...I've written up a draft that introduces a new frame at the QUIC layer, a so-called RELIABLE_RESET_STREAM frame. This is just a RESET_STREAM frame with one additional field:
Q
a reliable size, which is a varint. So what exactly does it do? Next slide.
Q
We have the final size, as in the RESET_STREAM frame, but we also have this orange part, which is the reliable part of the stream. As the sender of the RELIABLE_RESET_STREAM frame, I guarantee that I will retransmit all the data up to the reliable size; after that, no guarantees. As the receiver of a RELIABLE_RESET_STREAM frame, I then know that I can deliver all bytes up to the reliable size to the application, and only then surface the error.
Q
I'd like to point out that the RESET_STREAM frame is just a degenerate RELIABLE_RESET_STREAM frame where the reliable size is exactly zero. Next slide.
Q
So, as I said: as the sender of the frame, you guarantee that you retransmit everything up to the reliable size; and as the receiver of the frame, you guarantee to your application that you deliver that many bytes, and only after that do you surface the error from the stream reset. Next slide.
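The receiver-side rule just stated can be sketched in a few lines. This is a deliberately minimal illustration with invented names: it ignores reassembly gaps and waiting for retransmissions, which a real implementation must handle before delivering.

```python
class StreamReceiver:
    """Toy receiver: buffer stream bytes; on RELIABLE_RESET_STREAM,
    deliver bytes up to reliable_size before surfacing the reset error."""

    def __init__(self):
        self.buffer = bytearray()

    def on_stream_frame(self, offset, data):
        end = offset + len(data)
        if end > len(self.buffer):
            self.buffer.extend(b"\x00" * (end - len(self.buffer)))
        self.buffer[offset:end] = data

    def on_reliable_reset(self, reliable_size, error_code):
        # reliable_size == 0 degenerates to a plain RESET_STREAM.
        deliver = bytes(self.buffer[:reliable_size])
        return deliver, error_code
```

With `reliable_size` covering the application-level identifier at the start of the stream, the demultiplexing problem from the earlier slides disappears: the identifier is delivered before the error is surfaced.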
Q
It is not as scary as it sounds: it was about 400 lines of code added to my implementation, of which 300 were tests and just 100 actual code. So it is doable. Next slide — yeah, that brings me to the end of my presentation.
A
As someone who has written some of the code that introduced bugs around resets of QUIC streams, and then subsequently had to fix them, I would rather not complicate this more than it already is, and 114 lines is 114 lines more opportunities for bugs that are potentially catastrophic. And so I was wondering:
A
I made this suggestion on the list: instead of having this, what if we had a reset that had semantic application data in it? Which is to say: hey, I'm resetting this stream, and here is some application-relevant information, such as a stream identifier, that can be communicated to the app layer, which will then know what kind of stream is being reset. And then you would not have to change anything about the QUIC stream state machine. Okay, Jana.
T
Thanks, Matt. Thanks for the presentation, Marten. I think that the semantics of what a stream and a stream reset are have been pretty clear, in the sense that you basically think of a stream as an object. It's a semantic that you use for something like that, and then at some point, when you want to actually reset the stream, you don't have to reset the stream.
T
You could send a FIN, and you could actually, you know, fully reliably deliver the entire stream content before you discard anything else the application says it wants to, and this allows for a certain degree of freedom. I have to say that the complexity of doing this work, not just in terms of the protocol implementation, but in terms of, you know, thinking about it, and in some ways there's a purity to the stream.
T
I'm not sure that I'm convinced there's a lot of value for the complexity that this introduces. So I'm not convinced about the use cases, that's what I'd say, and I'm not convinced that the use cases cannot be solved in other, simpler ways.
Q
I would be happy to discuss other solutions, because this is a problem that comes up in WebTransport; it's a problem that comes up in other applications that are built on top of QUIC, and it seems like they all run into the same problem. If we don't have this or something like this, they would all need to build something at the application layer to solve this problem, which seems like fundamentally the wrong layer to do it.
Q
Jana, maybe, if I can comment on your point that you can just FIN the stream and have it delivered reliably: I don't think this works in most cases. First of all, there are application-level semantics about FINing and resetting: if I reset, I know that the thing I sent is not complete; if I FIN it, then I'm telling the peer that what I sent is complete.
I
Yeah, so just as a chair interrupt: we're short on time, so if you're in the queue, come up, but please keep it quick; I need chocolate. Yes, that's me! Oh yes, I'll keep it quick. Just as we talked through this, I've seen the proposal on the list, but I'm just like, I think H3 kind of has these problems already, because there's a lot of rules around: you must create a control stream, but you must not close one, but there's probably some race condition that could happen.
S
Mike Bishop, HTTP/3 editor, and I can comment on that. Actually, one of the changes that we took really exceptionally late in the process to the H3 spec was on some of the unidirectional streams, saying: hey, as a receiver, it would be a really bad idea to reset the stream until you've read the first however-many bytes, to know what the stream ID is, or the push ID in that case, I think. And this is basically solving the same problem from the send side. And yeah, it's a problem that exists.
E
David Schinazi, WebTransport enthusiast and chair, but speaking in my personal capacity here. I haven't really seen this problem in practice for the H3 side of things, because, like, I get how this can happen in theory; in practice, why does your implementation go around resetting streams willy-nilly? Like: doctor, it hurts when I poke myself in the eye. Don't do that.
E
So that happens because of the separation of entities we have between, like, the JavaScript, the browser, and the QUIC stack. I see the solution that's been proposed in WebTransport, using a capsule, as being much simpler, and, as chair, I want WebTransport to get done as quickly as possible. So let's not do something too complicated.
E
I was initially saying, like, really, let's not do this; let's instead just solve something at the WebTransport layer. I would maybe be open to something that I think Jana proposed, which was a much simpler version of this, which is like: yeah, in that reliable reset frame, you can put the start of the stream, and that definitely reduces the amount you can send. But in practice we're only ever going to want to send like two varints, tops.
Q
So maybe on the WebTransport capsule: I don't agree that this is the simpler solution. I have slides on that for the WebTransport session.
Q
It creates a lot of complexity if you really think about the corner cases. I think the WebTransport working group would be the more appropriate place to talk about those. So.
E
If I may, speaking now as WebTransport chair: we're going to discuss this in WebTransport. So if people care about this topic, please come to the meeting I'm chairing; I don't remember when it is. We'll discuss it then, and then, based on what the consensus in the room seems to be there, I think that'll be useful input here, because if WebTransport says, all right, we're going to ask QUIC to fix this, that's a different story than us fixing it in-house. So please come to WebTransport.
A
And with that, we have one more, as time permits: it's about FEC in QUIC. And thank you, Marten.
Y
So hello, everyone. I am François Michel. I'm doing a PhD at UCLouvain, and I'll discuss the FEC extension for QUIC. Next slide.
Y
So, although there are a lot of reliability mechanisms in the lower layers, we can still see a lot of losses at the transport layer. The graph on the left is a cumulative distribution function I plotted from Measurement Lab speed tests, showing the loss rates that people see when they do speed tests, and they are still quite high loss rates. On the right:
Y
The idea is simple: if you want to send three data packets, for instance, you can send a redundant packet so that, if you receive at least two of the data packets and the redundant packet, you can reconstruct the lost data. Next slide. So there is already a lot of research that has been done with QUIC and FEC; we did some of it, but most of it was only focusing on emulated and simulated losses, except the original QUIC paper.
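The redundant-packet idea described here can be illustrated with the simplest possible code: a single XOR repair symbol over equal-length packets (a sketch for intuition, not the draft's actual coding scheme):

```python
def xor_repair(packets: list[bytes]) -> bytes:
    """Build one repair symbol as the byte-wise XOR of the source packets."""
    repair = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            repair[i] ^= byte
    return bytes(repair)

def recover_missing(received: list[bytes], repair: bytes) -> bytes:
    """Recover a single lost packet: XOR of the survivors and the repair."""
    return xor_repair(received + [repair])
```

Any two of the three data packets plus the repair symbol are enough to rebuild the third; two losses in the same block, however, are unrecoverable, which is exactly the burst-loss weakness of XOR codes.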
Y
But the original QUIC paper used an XOR error-correcting code, so it was not able to handle correlated losses, a.k.a. loss bursts. So after all these works of research, we started to think that it might be time to reconsider the use of FEC in QUIC with real networks, basically. Next slide. So we did some experiments: we recently re-implemented an FEC extension on an existing QUIC implementation, Cloudflare's quiche implementation.
Y
We did some experiments with HTTP downloads and uploads, and we saw, with that table, where we did downloads of 50-kilobyte files, that overall there is no harm when there is no loss. But when there are losses during the download, we saw that we could improve the download completion time quite significantly, especially when there are losses at the end of the download. There's a question, yep.
F
Multipath QUIC and FEC for QUIC were two things that were in the original charter that we did not do while we were working on the core QUIC specification, because we wanted to finish the core QUIC specification. So I'm really glad to see this resurface, and I would appreciate it if people in the working group could tell you if they think this is a bad idea, because it seemed obvious to a bunch of people early on that it would be useful.
Y
I think so, yeah. So basically we designed the draft explaining a lot of stuff: we are able to negotiate the error-correcting code and so on. We must select the error-correcting code carefully, next slide, because there is the IPR issue.
Y
The most well-known codes are under patents that are still active, and there are other codes that work well but that are not under patents; they are a bit less efficient. Next slide. But finally, we searched a bit and we found a good candidate, which is called Tetrys, that was developed in the network coding research group, a.k.a. the French Mafia. Basically these are fountain codes; there is no IPR disclosure and it's described as patent-free. Next slide.
Y
So if you want to help us and take part in this process, don't hesitate to discuss the draft on the mailing list. And if you want to use Tetrys, we have an implementation; it's not open source yet, but the idea is to have it totally open source and free. So if you want to use it, send us an email and we can send you an invite to the repo. So thank you. Jana.
T
Thank you for the presentation. FEC and QUIC have a long and colorful history, so to speak, and I'm not sure what the future holds either. But what I'll say is: there's the particular mechanism that you're talking about, which is to do an FEC frame or packet at the end of the transfer. When you say the end of the transfer, do you mean in sort of a quiescent time, when you have nothing to send and you're waiting for the next request to be received?
T
So that's cool. That is another technique, or another thing, that you could try there, and I would recommend that you try it as well, as a reference: not FEC, but just simply, proactively,
T
do some retransmissions. Because at that point in time, typically you'll have only one window of packets, or a partial window of packets, outstanding on the network, and that is all that matters, right, because everything before that is typically acknowledged and has been received by the receiver. At this point you're only looking at a partial window of data that you need to send, and you could just as well retransmit some of those. If you have space in the congestion window to send more, instead of just sending one FEC packet, you can send out a few retransmissions.
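That suggestion can be sketched as follows, with hypothetical names for the sender state: at the tail of a transfer, spare congestion-window space is filled with copies of still-unacknowledged packets instead of FEC symbols.

```python
def pick_proactive_retransmissions(unacked_packets: list[bytes],
                                   cwnd: int,
                                   bytes_in_flight: int) -> list[bytes]:
    """Choose already-sent, not-yet-acked packets to send again,
    newest first, without exceeding the congestion window."""
    budget = cwnd - bytes_in_flight
    chosen = []
    for pkt in reversed(unacked_packets):   # tail of the transfer first
        if len(pkt) > budget:
            break
        chosen.append(pkt)
        budget -= len(pkt)
    return chosen
```

The design point is that these duplicates cost nothing extra in latency when the window would otherwise go unused, and unlike FEC they require no decoder on the receiver.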
R
Yeah, I mean, just to say that it's a great idea. I have done a lot of simulation of the use of FEC after the download on a satellite link. I was not using FEC; I was actually repeating all of the last window of packets, which is a poor man's implementation. It does help in a number of use cases, and I think it's good work, and we should encourage that.
R
It's clear that having something there is good. What I would like to see are published statistics on the loss patterns of the links that you are using, as in: what's the mean time between errors? How many packets are affected by errors, and things like that. Because one of the reasons that the classic FECs don't work well is that we have error bursts, and I would like this kind of thing to be documented somehow.
K
Yeah, I agree with Christian. We are all very interested in the loss interval between packets, or whether the loss of packets is bursty or more random, because we also tried FEC, and one thing we figured out is that many link-layer technologies already have FEC or hybrid ARQ; they do retransmission. So what ends up happening, when you do the observation, is you find out:
K
Okay, if you have loss, the loss is mostly like a bursty loss, but FEC is not very effective at fighting against bursty loss, and if you want to fight against bursty loss, what you can do is increase your coding length, but that also increases your decoding complexity.
V
Yeah, can you speak a little bit about why you chose to protect frames? From our perspective, application data is the most important thing to protect here, and in the previous efforts, I know Ian's just left, but he was protecting the stream contents rather than the frames themselves, because when we generate a new packet for the same stream data, we don't necessarily put it in the same frame as last time, and while some implementations do effectively just copy frames, most don't. So that seemed to me like a pretty major hurdle.
B
Yeah, thanks for your talk. It looks like the path you chose had quite a lot of persistent loss.
B
We're gonna go and consume some chocolates. I do have some stickers at home, somewhere in a box; I do not know where they are. Come and find me during the week, and if I find them, you can have some, sure. Thank you.