From YouTube: IETF112-QUIC-20211110-1200
Description
QUIC meeting session at IETF112
2021/11/10 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A
While we're waiting, would anyone like to volunteer to be a Jabber scribe?
A
And would anyone like to volunteer to help Robin with the note-taking today? Robin will be presenting for 15 minutes, but beyond that? Whoever would like to dive in, they'll be very welcome. I see Watson's volunteered; thank you, Watson, for Jabber scribe. If we could get someone to assist with the note-taking, that would be very welcome too.
A
Any takers? We'll need a note taker to make sure that we don't miss the notes from the qlog bit. I might have to start asking people by name if we don't have any takers.
A
Jonathan, thank you very much. You were the person I had in mind to ask, so I'm glad you did it first. Right then, let's get started with the session. Hello, everybody. This is the QUIC working group session; if this is not the session you intended to join, please leave now. I'm Lucas, I'm one of the co-chairs; Matt, who's just putting his video on, is my fellow co-chair. This is the first session, I believe, that Lars hasn't chaired a QUIC session.
A
So it's breaking new ground. I'm just trying to see if Lars is even in the attendees list. Maybe, yes, he is. Hello, Lars. I thought you might have escaped permanently, but it seems you did want to come back. So this is a QUIC working group session.
A
Like I said, we wanted to go through the usual Note Well and so on, which I'll come on to next, but also give a little bit of an update from the chairs about what the working group has been up to since the last meeting, and maybe the direction of the group. We've delivered RFC 9000, and people are a bit fragmented across different drafts. We've lost a little bit of focus, but the working group is still active, so we just wanted to keep people up to date.
A
This covers the IETF policies on topics such as patents or the code of conduct, and how you participate in the IETF. This is important for understanding your contributions to this working group and the IETF in general, so please do go and read that. We also wanted to give some special attention to the IETF code of conduct this time around: RFC 7154 lays out some of the expectations for participants in the IETF.
A
I'd like to say that the QUIC working group has a good track record of being able to function and have constructive discussions, so it doesn't hurt to remind ourselves that we should keep that up, and I look forward to a healthy discussion during this session. In terms of administrivia for this meeting, we've got the notetakers; thank you, Robin and Jonathan.
A
The blue sheets are automatically taken care of now by Meetecho, so that saves a chore. The chat will be Meetecho or Jabber, all integrated, again no problem for AV. If you would like to present, or participate in the queue to ask questions, clarifying questions, or make comments, we're using Meetecho; people should be familiar with that by this point in the week.
A
But if you're not, there are several icons in the top left-hand corner of the screen. You have the raised hand to get in the queue when it's open; we will be closing the queue at some points, depending on how we manage time. We can also unmute your microphone or your video. For presenters, we're going to go with the screen-share tool built into Meetecho today, so when it's your slot:
A
You can click the page icon to request to share slides, and it'll be done by Meetecho, which is a little bit more friendly to people's streaming bandwidth; or, if you really need to use screen share, we can have the chairs share the slides for you. The chairs will be running the queue, and my glamorous assistant Matt will be helping me on that front. But down to the order of business today. Just a recap of the agenda: like I said, we've got some chair updates.
A
After this administrative section, we have some working group items on the agenda: we've got the ACK frequency draft, then we have qlog, then version negotiation, followed by QUIC-LB, and then we're on to a new-work section, where we're going to focus on the multipath extension.
A
Mary will be presenting on behalf of a few folks that have been working on that and had a side meeting a couple of weeks ago, and that'll be the bulk of our time in that section. And then, as time permits, we'll look at some of the things here like receive timestamps, QUIC vNext, whatever that might be, or the 0-RTT BDP. So before we get started on the agenda in earnest, is there any agenda bashing that anyone would like to do?
A
I don't see anyone in the queue or any comments, so, okay, we'll just proceed. So I'll take some spotlight for the chairs here, some updates since the last meeting in terms of the work that this group has adopted and is trying to complete. HTTP/3 and QPACK, in case anyone's wondering, are still in the RFC editor queue; we're awaiting the work to happen on the HTTP core drafts, which are effectively a dependency for those two, in a way, and they're making progress, they're doing okay. There's a whole cluster.
A
If
you're
interested
you
can
go
and
look
at,
but
we're
effectively
waiting
they've
been
there
a
long
time.
If
you
go
look,
they
were
submitted
around
february.
I
believe
so
we
we're
getting
there
slowly
in
case
people
aren't
too
familiar
with
the
deployment
strategy.
A
If you recall, a few months ago, as a group, we decided that now that RFC 9000 was out, which is QUIC version 1, the HTTP/3 ALPN, which is the label h3, was allowed to be deployed on the internet, and I believe we've seen a decent amount of live deployments and interoperability on the public internet, which is great.
A
So I don't see the specs being in the RFC editor queue as preventing progress here. If anyone was watching the mailing list, some minor issues have been identified in the drafts while those things have been in the queue. They're kind of minor things, maybe edge cases, but some of the resolutions were to normative text, so a consensus call is still live.
A
I believe (I need to check my dates on those) it's for the working group, at the request of our AD, to make sure that we're all happy with incorporating the proposals there as part of the AUTH48 changes that the editors will do. We've seen a lot of good feedback on that.
A
If you haven't seen them, please do check them out and let us know; otherwise, if we don't hear pushback, we'll direct people to incorporate the changes as and when needed. The ops drafts (the applicability and manageability statements) are in AD review. So thank you to everyone for the feedback during the various rounds of working group last call, and to the editors for working with people on that to get it done.
A
That's appreciated. The datagram draft also passed working group last call, and that's also in AD review, and the GREASE bit working group last call is completed. We had some feedback, and there are a few minor changes there, so we're just awaiting a new draft, and then we'll do our shepherd write-up and pass that on to the AD. So thank you. And I see a note's just been posted in the chat that there's an update inbound; brilliant. Some related work:
A
Sometimes it's easy to assume that all the work that relates to QUIC happens here, and that's not true. Now that we are an RFC, people are excited and they want to use this document.
A
To be honest, they've been excited for a long time. So one of the things that did come to our attention around the IETF was that the DNS over QUIC specification went to working group last call in DPRIVE, and I think a few of us from this working group posted some comments that have been incorporated, or proposals for addressing those comments have been incorporated.
A
There was a side meeting for Media over QUIC, which is a non-working-group activity that was formed. I see in the chat that I have the dates wrong, so I apologize for that; we had a meeting yesterday. Jonathan just jumped in the queue and then left, so I think he was trying to draw my attention to that.
A
I hope I've addressed the points. All of the side meetings are in the IETF side-meetings calendar, so please check that for the actual correct date and time and the actual joining instructions; there'll be a link to it. It was on Zoom yesterday. That isn't necessarily strictly anything on QUIC; we're still trying to figure out things in that group.
A
You know, what are the requirements for using QUIC versus modifying QUIC, and trying to understand if any work needs to come back here or not. So, anyone interested in that, there's some healthy discussion there; it's kind of interesting. Oh, and Robin also mentioned there's a side meeting today about OpenSSL and its QUIC support, so if anyone's interested in that, again check out the side meeting calendar, because all the details will be there.
A
And then, talking about new work, just to remind people (this shouldn't be new information): when it comes to adopting new work items, we have a charter that's up on the datatracker, and just to summarize it, the working group is the focal point for any QUIC-related work in the IETF.
A
The focus areas are kind of three. The maintenance and evolution of the QUIC specs: as and when we need to do things that improve upon, fix, or iterate upon drafts that have been adopted and published; that should be fairly obvious. Then there's supporting deployability, which relates to drafts like the ops drafts, or qlog, or load balancers, that level of things.
A
So we can support non-protocol-specific work items too, if they make sense for us to do. And then new extensions to QUIC: datagram is an example of an extension, and we've got another one coming up later in this session. But, as I mentioned, DNS over QUIC is an example of how an application protocol could use QUIC, and that work was done outside our working group.
A
It could be done here if we really need it, if a suitable home couldn't be found, but I would hope that most of the discussion in such specifications relates to the application protocol, and that we have an awareness of the work that's happening there, so that people from this working group can go and review with their QUIC hat on and maybe give some feedback about those aspects of things. And then, defining new congestion controls is out of scope.
A
So please don't come with those, because we'll have to point you somewhere else. But specifically to this session:
A
We're going to talk about multipath QUIC extensions. The background here, without trying to preempt too much of what Mary will talk about later, is that we had an interim focused on multipath about a year ago. Multipath was in scope for the initial charter that this working group had, and we descoped it into the current form that we have today, and there were a lot of ideas around what multipath could be and what different problems they were trying to solve. So we had that meeting last year, which I think helped but didn't get us into the right position, and since then a lot of work has gone on.
A
We've cleared things from our agenda and we have some more time, but we also have a clearer idea of the problems, and people have come together for a more unified proposal. So we'll see how the working group thinks on that, and the chairs will be taking an assessment about whether we think that work is worth adopting in its current form.
A
So that's the end of the QUIC chairs session. Next up on the agenda is ACK frequency, so I penciled in Jana to present, but I guess it might be Ian too. Which one of you is it?
B
Let's do that; I'll chat with you in the back. But if he's not able to run it, obviously I'm happy to do it. It's just a little bit noisy for me. Oh, there he is.
C
Better lunch; let me get my headphones on. There we go, cool. So yeah, we're kind of hoping to wrap up the ACK frequency draft in the relatively near future. There's one fairly substantial design issue that's outstanding that I'm going to highlight in these slides, but first I'm going to start with kind of an overview of where the frame is at and talk through that, and if there are any questions, obviously feel free to interject.
C
Obviously this has a type, and there's a sequence number, so if you receive them out of order (it's still allowed to retransmit any frame you would like verbatim), it's possible that a sender that does this could receive an ACK_FREQUENCY frame that arrives out of order, so the sequence number fixes that problem.
C
The ack-eliciting threshold is, technically, the maximum number of ack-eliciting packets that the recipient of this frame can receive before sending an acknowledgement.
C
So that's the value you'd like the peer to delay an acknowledgement for upon receiving an ack-eliciting packet. Immediate acknowledgements are totally different. (Yes, sorry, I need to do something later; thanks.) The Ignore CE field is for ignoring Congestion Experienced; this was added as a result of conversations about how one might use ECN marking in different contexts.
C
In order that you can have, say, video over multiple packets. So we did add some caveats there about the safety of these and how you should really be using them, and those are in the draft now. So, next slide.
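As a rough illustration of the fields just walked through, here is a minimal sketch; the field names and the newer-sequence-number rule are assumptions for illustration, not the draft's wire format or normative text:

```python
# Hypothetical sketch of the ACK_FREQUENCY frame fields mentioned in the talk.
# Field names are assumptions, not the draft's exact definitions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AckFrequencyFrame:
    sequence_number: int              # lets a receiver handle out-of-order frames
    ack_eliciting_threshold: int      # max ack-eliciting packets before acking
    requested_max_ack_delay_us: int   # how long the peer may delay an acknowledgement
    ignore_ce: bool = False           # ignore Congestion Experienced marks (escape hatch)


def should_apply(current_seq: Optional[int], frame: AckFrequencyFrame) -> bool:
    """Apply a frame only if it is newer than the last one applied."""
    return current_seq is None or frame.sequence_number > current_seq
```

The sequence number is what "fixes that problem": a retransmitted or reordered frame with an old sequence number is simply ignored.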
C
The phrasing of the problem statement in the issue was much better than mine was. The issue is that one ACK is sent immediately today, just like with v1, but after that the next ACK will not be sent until the ack-eliciting threshold is hit, and there are a lot of situations where that might slow down packet-threshold loss detection. So the kind of example is: you have a drop, you send an immediate ACK, and then you don't send another ACK for, say, 10 or 20 packets.
C
It's pretty easy to hit a situation where, if you'd sent another ACK two packets later, the sender could have immediately declared a loss and moved on, but instead you're waiting, say, a quarter RTT for the time threshold to hit. In data centers this could be worse, unfortunately, because data centers tend to have timer granularities which are not amazing, and commonly the time thresholds we're talking about are literally unachievable in, like, a micro-center,
C
sorry, a microsecond-scale data center networking environment. So this is probably actually a worse problem in a data center than on the public internet. But anyway, that's kind of the outline of the problem, and the most important conclusion is that loss detection latency has the potential to be worse than in QUIC v1.
C
So the proposal here is to communicate the reordering threshold to the receiver, instead of this Ignore Order that we have right now, and the algorithm comes down to this: the receiver needs to send an immediate acknowledgement whenever they're missing packets somewhere between the largest acknowledged value that has been sent in a previous acknowledgement and the largest packet number that has currently been received, minus the reordering threshold. That guarantees that if basically any packet in that range is missing (assuming the peer has correctly communicated their reordering threshold to you), they can immediately declare a loss the moment they receive your immediate acknowledgement.
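The receiver-side rule described above can be sketched as follows; this is a simplified model for illustration (a flat set of packet numbers, assumed parameter names), not the draft's normative algorithm:

```python
# Simplified sketch: send an immediate ACK whenever a packet number is missing
# between the largest packet number already acknowledged and
# (largest received - reordering_threshold). A gap in that range lets the
# sender declare a loss immediately once it sees the acknowledgement.
def needs_immediate_ack(received: set, largest_acked_sent: int,
                        reordering_threshold: int) -> bool:
    largest_received = max(received)
    upper = largest_received - reordering_threshold
    return any(pn not in received
               for pn in range(largest_acked_sent + 1, upper + 1))
```

For example, with packets {0..3, 5, 6, 7} received, the gap at 4 is already more than a reordering threshold of 2 behind the largest received packet (7), so an immediate ACK is warranted; a gap at 6 would still be within the threshold and would not trigger one.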
C
So the result is that it reduces the number of ACKs when packets are received out of order compared to QUIC v1, while also improving the loss detection latency versus QUIC v1. In QUIC v1, because you ACK every other packet, it was possible to have a circumstance where you didn't receive that final packet-threshold ACK, say because it was two packets later, not three (kind of an off-by-one error), and it would increase loss detection latency slightly versus this approach.
C
So ideally this gives you the best of both worlds. The major negative is that it definitely increases implementation cost by a slight margin, because you have to implement this algorithm right here. So that's the item to discuss; do people have thoughts? This PR has been kind of out there for a while, and there's the issue. Christian, please go ahead.
F
Can you hear me now? Yeah? Okay, that's a weirdness with the microphone; too many buttons. I'm not sure. I mean, you're proposing to add a reordering threshold based on packet counts, and I am not sure that's the right tool. The reason I say that is that we have tried a lot of that.
C
I would agree there are circumstances where this is not actually a helpful mechanism, and so we did maintain the existing Ignore Order functionality: sending a reordering threshold of zero basically means the existing Ignore Order functionality. Yeah, I think it really depends on the circumstance.
C
So this, I guess, is really about giving the sender more tools. As described in the recovery draft, the loss detection and congestion control draft, there are two essential mechanisms for declaring loss, right? One is a reordering threshold in packet counts and one is in time, and so this is giving the sender the maximum number of tools to express those two mechanisms, between the ACK delay and the reordering threshold. But you're right that this may not always be applicable.
F
Now, the packet sequence number is something which is very hard to use in practice, because, first, the size of the loss in packet sequence numbers varies with the congestion window, and second, it also depends on things like packet number skipping by the sender. So what you have there is a primary ACK delay; you might want to have a reordering delay or something like that.
C
Yeah, that's an interesting point. You're right that I'm mentally conflating the two as being sufficiently similar to serve both purposes, and you're right that it's possible they're not sufficiently similar. But yeah.
B
If it's all right: I think I don't quite understand the issue that Christian's having. At a high level, basically, packet threshold is already there, and the problem with the ACK delays is that we effectively did not have packet-threshold loss detection anymore, and what this proposal does is bring it back.
B
So it's not that we're bringing a new mechanism here; we are simply reinstating a mechanism that we lost because of this extension. Does that make sense to you, Christian? Maybe not quite; I mean, we'll have to discuss this on the issue and elsewhere. I'm just bringing it up.
D
Okay, so I think maybe I can understand what you're trying to do. I'm not sure about the first subtraction here; I'd like a picture, because I'm not smart enough to do this. The purpose of this discussion was so that people could understand what's going on, and I suspect this is probably something along the lines of the right solution here, but I don't know.
B
I think part of our goal here was to bring attention to this issue, because it's the only design issue that's left and pending, and I think either way we'll be able to come to a decision on it once we get people's eyes and minds staring at it. So yeah, I mean, I think we'd be happy to do something on email if that's necessary; it'd be a restating of what's on the issue already, but that's happily done.
G
Oh, thanks, Ian. I think reordering threshold is better than Ignore Order; this gives you a little bit more options than just true and false. But, as Christian said (I think I agree with him), if there's a reordering threshold, it can be either in time or in packets. I think maybe we can make this field either two fields, or make a way to distinguish whether we are setting this in time or in packets.
G
And I'm not sure if there's guidance in the draft for how the sender should set this.
C
I think there's no additional guidance. Actually, I thought we added guidance recently; Jana, I think, wrote a PR to add guidance for other use cases, and this would apply, so we'd have to make sure that guidance got applied to this.
B
Yeah, I don't think we have; I mean, the PR for this is not fully fleshed out yet. I think that's definitely good feedback. I think we need to have something that's more precise in terms of what an endpoint does if they don't have anything better that they can use; having a default in there is going to be very useful, and I think we can default back to the QUIC v1 defaults, but it'll be good to state that explicitly so that people know what to do.
I
Hi, thanks for doing this, and I do see it progressing; that's really good. I have two questions. The first one I think you've already touched on: I think there is some element of safety in here. If you get these numbers wildly wrong, then there is a congestion control implication, so I would really love to see some sort of safety considerations or recommended maximums or something; maybe we can discuss that later on the issue. Does that seem reasonable?
C
Yeah, we can either discuss it on this issue or on other issues. I'm happy to; it needs to be discussed, I think we all agree. I just want to make sure we get the right text in there, that it applies to all the various mechanisms we have, and that it's kind of a global recommendation. So I think we're all on the same page; it's just a matter of making sure we actually get the right text written down.
I
I asked that because I think it's probably better handled as a separate thing once we've got the issues pinned down, right?
I
And then my second point is even more crazy than, I think, what Vidhi raised. I'm not convinced that Ignore CE is in any way what the people who are designing L4S are imagining, and I wonder what the impact of actually deploying an Ignore CE thing, without some very strong guidance, would be on the whole L4S effort. Maybe we should just formulate that as a little bit of a separate issue that they can jump into, perhaps.
B
To add something quickly to Gorry: Ignore CE is not really designed to handle L4S. It's more of an escape hatch, in the sense that if you don't have any ACKs coming back, you don't have any accurate ECN feedback, and so this currently defeats any accuracy that you might want to do. So it's an escape hatch; it's not precisely defined as a solution for L4S, just to be clear.
A
I ask you politely to keep up this healthy discussion on the list and in the issues. Thank you, Ian, for the presentation. Your last slide talked about working group last call; it seems there's still this design issue and a few more issues to go. Another question the chairs are probably asking is about implementation and interoperability of ACK frequency.
A
I know there has been some, but if other people are doing that, it would help us understand how mature this draft is and how good it is. But beyond that, I think we'll need to move on. So once again, thank you, Ian. ("You're welcome, thanks.") Robin's up next with qlog. Do you wanna... there we go, the system works. Thank you; go ahead, Robin.
J
Yeah, I'm intelligible, I hope, since we had a few issues with that on Monday. So this will be short for qlog, because not that much happened for the latest draft in the past three months. The main thing we did was move to a different serialization format, and just to give a little bit of context on that: as you know, we use JSON for the main format, which is very nice and well supported. The main downside is that it's not very robust in most of the parsers.
J
For example, if you forget the final square bracket in this example, most of the parsers would simply break; they wouldn't even give you a partial result. This means typical JSON is not very usable for streaming cases, because there you often might not have the option to properly finish the JSON file, say if your implementation crashes before it can finish the full log.
J
So what we did initially was define a separate additional format next to that, for which we chose newline-delimited JSON, which is exactly what it sounds like: instead of having proper JSON, you have each JSON object as a separate line, and you can just use the newline as a delimiter, which is very simple and works quite well.
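The robustness point above can be sketched as follows; a minimal illustration (event contents are made up), showing that a truncated newline-delimited log still yields every complete event:

```python
# Each event is serialized as one line; a crash mid-write costs at most
# one partial trailing line, while a truncated regular JSON array costs
# the whole document.
import json


def write_ndjson(events):
    return "".join(json.dumps(e, separators=(",", ":")) + "\n" for e in events)


def read_ndjson(text):
    events = []
    for line in text.splitlines():
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            break  # partial trailing line from an unfinished write
    return events
```

For example, cutting the last few bytes off a two-event log still lets the first event be recovered, which is exactly what a plain JSON array parser would refuse to do.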
J
Basically, a smaller downside is that you can't have newlines inside of the JSON itself, which is a bit annoying if you're doing manual grepping or actually eyeballing the qlog files, because these events are typically very, very long if you do them on a single line.
J
We're now using the existing ASCII record separator character. This is not actually a string; because it's a binary character, it's difficult to visualize on the slide, so the RS with the brackets is actually just a single Unicode code point indicating the record separator character.
J
The nice thing there is that this is properly defined in an RFC; it's properly defined how to use it, and we can have newlines inside of events, which is also quite nice. The main downside was that this seems to be relatively esoteric.
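The record-separator framing described above (JSON text sequences, RFC 7464) can be sketched like this; a simplified illustration with made-up events, not qlog's actual schema:

```python
# RFC 7464 style framing: each record is prefixed with the ASCII record
# separator (0x1E) and followed by a newline, so the records themselves may
# contain newlines (e.g. pretty-printed JSON).
import json

RS = "\x1e"


def write_seq(events):
    return "".join(RS + json.dumps(e, indent=2) + "\n" for e in events)


def read_seq(text):
    events = []
    for chunk in text.split(RS):
        chunk = chunk.strip()
        if not chunk:
            continue
        try:
            events.append(json.loads(chunk))
        except json.JSONDecodeError:
            break  # tolerate a truncated final record
    return events
```

This keeps the streaming-friendly property of NDJSON while allowing multi-line, human-readable records.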
J
It isn't as broadly supported as newline-delimited JSON, despite being a standard, and so we decided we first wanted to get some practical experience with this before we switched the qlog format over to it, which is what we did.
J
And it turns out it's actually quite doable: if you're already doing NDJSON, it's very easy to move over to JSON text sequences in practice. We also tested various existing tools that people were using for qlog, like jq and all kinds of Linux-based text-processing tools, and they all seem to work just fine with this new format, and so that's why we decided to go for it. So that's the main change in draft 01: we moved from NDJSON to this. With that, there are a few additional minor changes; we have semantic versioning.
J
We are still supporting both JSON and JSON text sequences (so non-streaming, file-based, and streaming, let's say), but if you were for some reason waiting for this change to be made before updating your own implementation, I think it's fair to say that we'll likely stay with this streaming format. So if you were waiting for this, you can probably go ahead and start implementing it.
J
So that's since last time. What are we planning for the near future, by IETF 113? We're hoping to mainly do some editorial work, and the main thing there is that in the current drafts, for all of our events, we've tried to properly define which fields are on which events and which type those fields are.
J
But for those we are currently using the TypeScript data definition format, and here we have a very similar situation to NDJSON: it's usable, but it's not really well defined anywhere and it's not an RFC. So here again we want to make the move to something a bit more IETF-centric, which is the Concise Data Definition Language, CDDL. As you can see, it's very similar in concept.
J
So the goal there is to hopefully have something like, in tools like qvis, that you upload a qlog file and it can tell you exactly if you have any errors in there: misspelled fields, wrong types of fields, and so on and so forth, which is something we've seen surface a little bit in the past months for some people. So that's kind of the idea: it's both to make everything a bit more neat, IETF-style, but also to be more robust towards the future when we start adding new stuff.
J
Our main goals are to basically flesh out what we have. We are a bit lacking in TLS and QPACK stuff at this moment, so we want to add those things, and then we have a few events that need to be extended or touched up to be a bit clearer. And then there are some proposals to add more high-level, generalized stuff, like indicating which CPU or which thread a certain event comes from; the kind of stuff that other implementations, like Microsoft's, use in their custom logging format.
J
That might be useful down the line, and that's basically it. Last time I had in my slides a desire to have a global design guideline, or a design framework, so that people could add new events and even new protocols to qlog, and so on and so forth. We've since had a lot of discussion about that, and I think most people agree that this is a bit of a utopia; it would be quite difficult to achieve.
J
We need to think about how to indicate which protocols and which events are in which file, how to make it possible to change existing events (for example, adding new transport parameters into the existing transport parameter event that we have), and so on and so forth. And then we should also have a discussion about, let's say, we want qlog events for WebTransport.
J
How do we do that? Is that a separate draft? Is it a fully separate thing? Where does that live, and so on and so forth. So, a few of those practical issues. Those are not things that are very pressing at this moment, but it would be nice to have an idea of what people think about this, or to have some proposals on this by IETF 113, so that we can start the discussion in earnest by then.
K
Thank you for the presentation, and thank you for making the changes regarding the file format. I prefer choosing one rather than having support for two, because, in my opinion, supporting two is always more complex than supporting one and writing a conversion tool to the other.
J
Yeah, but even if we have two formats, it's still possible to just implement one and then convert it to the other if you want, right? I'm not sure I understand the question.
K
Right, I mean, if there's only one format defined then, for example, I can create a tool that only supports that format. But if there are two formats being defined, I'm forced to write a converter, or at least write a test, to support both of those two, and that's a complexity, I think, for everyone.
M
All right, regarding the RS-separated qlog: are the semantics of aggregating multiple logs into a single concatenated log going to be understood by the tools, or is that undefined behavior?
J
If you have the normal JSON one, we can append separate traces as individual objects. That's much more difficult in the streaming format: if you want to do that in the streaming format, you would have to indicate, for each individual event, which sub-trace it belongs to; otherwise it's impossible to demultiplex them afterwards.
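The demultiplexing point above can be sketched as follows; the "trace" field name is an assumption for illustration, not a field defined by the qlog drafts:

```python
# In a streamed, concatenated log, each event must carry an identifier for the
# sub-trace it belongs to, so tools can regroup events afterwards.
from collections import defaultdict


def demux(events):
    traces = defaultdict(list)
    for ev in events:
        traces[ev["trace"]].append(ev)
    return dict(traces)
```

Without such a per-event identifier, events from interleaved connections in one stream cannot be told apart, which is the undefined-behavior case the question raises.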
J
But, finalizing on that: that is something that qvis, for example, also already does. It's not that it's very complex, it's just a different way of doing things.
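The two container shapes being compared here can be sketched roughly as follows. This is a hypothetical Python illustration, not code from qlog or qvis; the record-separator-delimited form follows the JSON-SEQ style, and the `trace` field name is made up purely to show the de-multiplexing point.

```python
import json

RS = "\x1e"  # ASCII record separator, as used by JSON-SEQ-style streaming logs

# "Normal JSON" style: the whole log is one document, traces are a list,
# so appending a trace means re-parsing and rewriting the whole object.
normal_log = {"qlog_version": "0.3", "traces": [
    {"title": "trace-1", "events": [{"time": 0, "name": "packet_sent"}]},
]}
normal_log["traces"].append({"title": "trace-2", "events": []})

# Streaming style: each event is its own record, appended as it happens.
# To multiplex several traces into one stream, each record must say which
# trace it belongs to; otherwise it cannot be de-multiplexed later.
stream = ""
for trace_id, event in [(1, {"time": 0, "name": "packet_sent"}),
                        (2, {"time": 1, "name": "packet_received"})]:
    record = dict(event, trace=trace_id)  # "trace" is a hypothetical field
    stream += RS + json.dumps(record) + "\n"

# Reading the stream back and grouping records by trace id:
by_trace = {}
for chunk in stream.split(RS):
    if chunk.strip():
        rec = json.loads(chunk)
        by_trace.setdefault(rec["trace"], []).append(rec)
```

The sketch shows why a converter in either direction is mechanical, which is the point made in the exchange above: one format can be implemented natively and the other produced by a small translation step.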
A
Oh well, it doesn't seem like we've got anyone else in the queue. There's some good progress here and some good progress to come. I did see in the chat that Christine mentioned multipath support. I think, you know, sometimes defining these events takes some thinking, but it isn't too hard; it's more the question of
A
what makes sense to put in, and what are people willing to implement. Because you can define everything, and if no one implements them, then you can end up with some interop issues, like we found in the hackathon last week when we were consuming each other's qlogs. So I think there's some good work to come, and I look forward to that progress in the future.
J
Just for multipath, I was intentionally waiting a bit for the unified proposal, right, to know what direction we're going, and once that's settled down, I think it should be relatively easy to add at least provisional multipath events, and then also have some basic, let's say, qvis tooling around that to help with multipath debugging.
A
Yeah, cool. Well, thank you, Robin, and with that we can move on to version negotiation. So, David, would you like to step on up, please?
H
Cool, thanks. So, my name is David Schinazi, and I'm here to very, very briefly give folks an update on the version negotiation draft. So, a very accelerated history: we used to have VN in the core specs. We then decided to remove it and split it into its own draft. This is that draft.
H
We went around in circles for a bit on it, because it was too complicated. Then MT came around with a simpler design that folks liked; that's in the draft today. And then the question is: where do we go from here? So far, for the new simplified design, I haven't heard anyone say they dislike it, so we're in good shape.
H
We still have a lot of minor issues on the document. It's more that the editors haven't been focusing on it, so we haven't made good progress on those. But I did a quick pass on all of them last week, and it really looked like there was agreement on how to resolve all of them.
H
The question is kind of: what is the status with implementations? For the longest time we didn't have one. Just on Monday, I implemented it in our stack; not the compatible part, but at least the version downgrade part, and it was very straightforward. And kind of the question for the working group is: where do we want to go from here?
E
H
Do we want to wait until we have another version, so we can test the compatibility bits at scale before we ship this? We'd kind of like to get some sort of a timeline, because if this needs to get published soon, then it's worth it for the editors to prioritize fixing these editorial things. Otherwise we can kind of kick this can down the road until another version. So: thoughts, questions, what do we do now? This is the end of my presentation.
H
Crickets. So, one thing is: please interop with my server. I put it on the general channel of the Slack; I just want to make sure that I implemented it correctly. But apart from that, given that there's not too much excitement about this, I guess maybe we just kind of kick the can down the road a bit further. Matt, do you have any thoughts?
L
Yeah, I was just gonna say, you know, hearing the deafening silence from the working group: I think everyone agrees this is something we need, but it doesn't seem like there's a lot of implementation appetite at the moment. I think for something like this we do want implementations and interop before we, you know, kind of shepherd it through. So everyone keep that in mind: if you see this being something that you want in the future, please implement it sooner rather than later.
B
Hey, thanks for the presentation, David. Just very quickly: it seems to me that it might actually be useful to get some implementation and deployment experience along with a different version. So I wonder if the QUIC v2 discussion that we are about to have later might play into how quickly we move on this draft.
H
That makes sense, because if we have a v2 that is compatible with the first one, that'll help us to use this at scale. Also, we won't be able to ship a v2 without a version negotiation mechanism with v1. So, yeah, maybe these become a joint cluster, and I think QUIC v2 is on the agenda as time permits. So in that case I'll just say: let's punt for now, but let's see what comes out of the v2 discussion.
L
All right, since I'm already on the mic: yeah, as we can see, Martin's coming up with QUIC-LB, so go ahead, Martin.
E
Staying in the vein of things that people sort of support (can I get a clock reset, please): things that people support, but there aren't a lot of implementations of. QUIC-LB.
E
Too many choices, and I agreed with that. And then, of course, the other concern: there weren't a ton of implementations, particularly on the server side, and no interop activity to date that I'm aware of. So, some things have happened here. Christian Huitema has been incredibly valuable in fixing the now very much misnamed stream cipher algorithm into something that is a little more secure.
E
We've kind of just cleaned up some of the nomenclature, to make it less "three completely disjoint algorithms" and more three different sets of options to kind of the same core structure. So it's just a simplification thing.
E
An addition a couple of drafts ago was dynamic server ID allocation, not preconfiguring that, and I think we've reached consensus that it's a little too cute and there are too many difficult mechanisms associated with it, so that's gone again. And then I've heard some good news about implementations, but I haven't seen anything concrete yet. Again, I know there are a lot of server implementers in the room, and that is where we are hurting most.
E
There we go, okay. So this is kind of my dream of how this can move forward. Chris and I have asked for a CFRG review of the stream cipher algorithm, since Christian came up with something that was kind of a variant, so we've kind of rolled our own crypto, and we're going through that process now. Not sure exactly what the reaction will be. If that goes well, and it shows that we have the same or similar security properties to the block cipher algorithm, it's not clear to me what the value of the block cipher algorithm is.
E
There's an issue in the GitHub if you want to weigh in on this, but essentially you'd be getting longer CIDs; the only benefit would be fewer crypto passes, which seems like not that strong an advantage. And then that would dramatically simplify the document: essentially have a plaintext version and a ciphertext version of the same thing, so that would be as simple as you could get. I really would like to split the load balancer portion of this and the retry service portion of this.
E
I view them as entirely orthogonal to each other. The load balancer thing is essentially version-invariant, with a few caveats, and the retry service thing really is not. And I'm getting proposals for things like stateless reset offload and so on, and they all just have the common theme of middlebox coordination. But I don't know if that's a strong enough thread to hold these two things together.
E
I've also just kind of become enamored of the idea that smaller drafts are better, because people read them, and when there's a lot of ancillary stuff that half the people don't care about, it just makes things worse. So if people have strong opinions about that, I'll be happy to hear them. Then, if all this goes according to plan, we do another editorial pass, and then it'd be done, as done as it's going to be.
E
Then it's just the question of getting a couple of implementations that are up on them, and then we're done. So, let's see, do I have another slide? Yes. I don't know if you want to talk about the block CID stuff right now; we can get into that if you want. So I'll just open the floor for comments.
C
Ian here. The only thing I had to share is that, now that we've kind of worked our way through our backlog of QUIC work in other areas, we are very excited to hopefully take this on in the first half of 2022. There are still some details to be worked out in terms of who's going to do the work and everything, but the use cases are there. But, that said, to be clear:
C
that's the stream cipher protocol. The key rotation and all of those sorts of mechanisms, the key exchange portion and all that stuff, we probably will support at some point in the future, but that's more of a year from now, or more, probably, sort of thing. So I think that probably supports the idea that, at least in our case, you're gonna get a deployment of one of them a lot earlier than you're going to get deployment of the other half. But I don't know.
E
Are you talking about the retry service versus the load balancing? Yes, sorry, that's right. Okay, yeah, all right, super, thank you. I mean, that's the other thing: I anticipate the implementations of one of those components versus the other to be very asymmetric, and if one's ready to go, I'd just like to move it. And at some point I'll produce a mock-up of what these split documents would look like, and then people can shoot at it. Lucas.
A
E
I brought it up a while ago, I think it was five or six weeks ago, and got a little bit of negative pushback; it wasn't super strong. I've become more convinced of this, so I'm bringing it up again. And, you know, I haven't really discussed this on the list much recently.
E
I will do so, but I'm kind of introducing the topic now, frankly, to allow people to comment.
A
Okay, that's cool. I think the list is a good venue to continue that. Chair hat off, personal opinion: if splitting them is going to help make some progress on the one thing that people really do want to implement, that seems like a benefit, but I'm not familiar enough with QUIC-LB to know what the potential downside might be for that split.
E
I mean, they're functionally completely separate. They can, in theory, be implemented on the same device, but there's no real relationship between the two. They're together because they were, again, under the theme of middlebox cooperation, which is a pretty weak thread, in my view. So, there's a comment in the chat about having missed the CFRG email. That's because we didn't actually go directly to CFRG, upon the advice of the security ADs.
E
We contacted a couple of people directly. If other people would like to provide a crypto review of, specifically, section 5.2, the stream cipher CID algorithm, that would be very welcome. I just received the advice that it would be better to actually contact people directly.
E
And Matt, are you in the queue?
L
Oh, I'm mostly just hanging out, but I was going to say one thing, which is that I was curious, from a chair perspective: what is your opinion on the necessity for interop?
E
So, I have two concerns with this, beyond just my ideal path for how this proceeds, before I'd consider it mature. Number one is whether the document is written in a way that it's implementable, because obviously I knew what I meant when I implemented it, which doesn't mean a ton. And then the second thing, and I hope that especially Google's intentions are helpful here, is actually trying to deploy this thing.
E
We have sort of a configuration model that has some assumptions, and it'll be really nice to test that against actual production somehow. So I'm really eager to hear what the issues are with Google, what the rough edges are on the assumptions we make on how to configure things, and what the pain points are.
E
I know that my co-author, Nick Banks, has been working with Azure on this stuff, and it hasn't gone super well in terms of getting them to provide good feedback. So those are really the only reasons, other than just the cleanup in the draft. Those are the two things where I just don't want to hit the WGLC button right now.
A
Thank you, Martin, that's very kind of you. Okay, so Mirja is jumping in the queue; it's time for the new work topic. So, Mirja, take it away on the topic of the multipath extension.
N
Okay, hello, everybody. So yeah, I'm Mirja, and I'm presenting a new draft here, but effectively this is not new work. There have previously been three different drafts, and, as you can see here on this new draft, we have a set of authors that actually combines the authors from the previous drafts. So that's the context.
N
So, let's move to the next one. I can just control this myself, okay, yeah. So, what happened so far? Lucas already mentioned that we had an interim meeting about a year ago, where we talked about multipath QUIC use cases and mainly requirements, and since then there has been a lot of work, and people have been working on implementations.
N
There were three different proposals, and in order to move forward from here, we somehow needed to agree on the right way to move forward, like combining those proposals and getting one way out of this. So I got in touch with all the authors from all the drafts, and there was already a lot of agreement about what to do, and everybody was immediately on board for having one unified solution.
N
So we had a side meeting only a few weeks ago, where we discussed how to move forward here, and as a result of that, we recently published this new draft, which does take parts of the other three drafts. So this presentation will give you an overview of what the focus of the new draft is and what's in the new draft. And this is the most important outcome from all the discussion we had: the new draft is really focusing on the core components.
N
So, in the other three drafts there were always parts which were not core components, things like QoE handling, address discovery, and these kinds of things, and in order to move forward, we really want to focus on those things that are needed to establish multipath: the usage of multiple paths simultaneously.
N
So that means the current draft has, as core components, of course, the negotiation of the new extension; it has a minimal set of path management that is needed, about setting up new paths and closing new or old paths. It talks a little bit about scheduling, but that's really minimal at this point; we probably need a few more words, but we don't want to talk about it extensively, only what's needed to make it work. And then it talks about how to actually transmit and retransmit packets. So that's it.
N
That's the core. Everything else that has been previously discussed, like more advanced scheduling mechanisms, or unidirectional paths, address discovery, or any kind of QoS handling, these are things that could come up later on in additional drafts, or could be additional extensions on top of that.
N
So, the other thing that we had very broad agreement on is that one of the important design principles here is to use as much as possible from RFC 9000, and this is also the feedback I got, not only from the authors, but from everybody else I talked to: please keep this as simple as possible. And that means make a minimal set of changes here, and we actually try to do that. So, what we don't change is path validation.
N
We completely keep it as it is in QUIC version 1, as used for path migration. We don't have to change basically anything about congestion control, because congestion control was always per path. So, when you had migration, you had to reset your congestion control, and now you have to run multiple congestion controllers in parallel, one for each path. We didn't change the header format, and also didn't change anything in the 0-RTT packet. So everything that we do for multipath is done for 1-RTT packets.
N
There is another design principle which is kind of important: we define a path as a 4-tuple. That makes some assumptions easier, and it mainly means that you can only have one path per 4-tuple.
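The "one path per 4-tuple" principle can be illustrated with a tiny sketch. This is a hypothetical illustration, not code from any QUIC stack: keying path state by the 4-tuple makes the uniqueness property fall out naturally.

```python
from typing import Dict, Tuple

# A path keyed by its 4-tuple, per the design principle above:
# (source IP, source port, destination IP, destination port).
FourTuple = Tuple[str, int, str, int]

paths: Dict[FourTuple, dict] = {}

def open_path(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> dict:
    key = (src_ip, src_port, dst_ip, dst_port)
    # A second open on the same 4-tuple reuses the existing path state,
    # so there can only ever be one path per 4-tuple.
    return paths.setdefault(key, {"validated": False})

p1 = open_path("10.0.0.1", 4433, "192.0.2.1", 443)
p2 = open_path("10.0.0.1", 4433, "192.0.2.1", 443)  # same 4-tuple, same path
p3 = open_path("10.0.0.2", 4433, "192.0.2.1", 443)  # new 4-tuple, new path
```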
N
So, what did we change, or what did we add? I probably have to say "add": what we add is that we not only want migration, we actually want simultaneous use of multiple paths. So that means that you can send non-probing frames on multiple paths at the same time. This is something that is not allowed by RFC 9000, and that's something we want to enable here. So that's one change, and the other change is that, now that you have multiple open paths,
N
you also have to care about closing those paths. So that's some additional mechanics that we added here, but it's also not a lot of stuff.
N
Before I talk more about packet number spaces, which is one of the open points, I'll just show you on the next slide what we are proposing to do in this draft right now. So, what we have in the draft right now is that you can actually negotiate either the use of one packet number space or the use of multiple packet number spaces, or you can indicate in the handshake that you support both.
N
So, what does it mean, one packet number space or multiple packet number spaces? With one packet number space, every packet you want to send out gets the next packet number, in order, and then you decide to send it out on one or the other path. So you might send out packet one on one path and packet two on the other path.
N
If you have multiple packet number spaces, the packet numbers are independent on each path: you can have packet one on path number one, and you can have packet one on path number two, but you have to identify which path it was sent on.
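The two modes just described can be sketched as follows. This is a hypothetical illustration of the numbering behavior only, not code from the draft or from any implementation.

```python
class SinglePNSpace:
    """One packet number space shared by all paths (as described above)."""
    def __init__(self):
        self.next_pn = 0

    def send(self, path_id):
        pn = self.next_pn  # numbers are globally ordered across paths
        self.next_pn += 1
        return (path_id, pn)


class MultiPNSpace:
    """Independent packet number space per path. The packet number alone
    is ambiguous, so (path_id, pn) is needed to identify a packet."""
    def __init__(self):
        self.next_pn = {}

    def send(self, path_id):
        pn = self.next_pn.get(path_id, 0)
        self.next_pn[path_id] = pn + 1
        return (path_id, pn)


single, multi = SinglePNSpace(), MultiPNSpace()
# Send on path 1, then path 2, then path 1 again:
s = [single.send(p) for p in (1, 2, 1)]  # -> [(1, 0), (2, 1), (1, 2)]
m = [multi.send(p) for p in (1, 2, 1)]   # -> [(1, 0), (2, 0), (1, 1)]
```

In the multi-space run, both paths legitimately carry a "packet 0", which is exactly why the receiver must know which path (or packet number space) each packet belongs to.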
N
We have those two options in the negotiation and in the specification of the current version of the draft, but this is to enable evaluation and implementation experience. There is no intention to publish this kind of negotiation in the final version of this extension in any RFC, but we propose to actually take it on in the working group like this, so we can get more experience here, make progress, and have more discussion about this.
N
So, on this slide I try to summarize, at a high level, the current discussion about packet number spaces and why it's not easy to make a decision right away. We had some discussion about this in the side meeting; there's also a nice slide set from Christian Huitema about this, if you want to learn more. But the high-level message on this slide is that there are pros and cons for both of the approaches.
N
So, starting with the single packet number space: the pros, sorry, are that it's kind of easy to implement, it's only a few lines of code that you need to add or change, and the other pro is that it does work with zero-length connection IDs.
N
The cons are that you can get into a situation where you actually increase your ACK frame size noticeably. Especially if you have two paths which have very different latencies, you might get a lot of reordering.
N
You can be smart about how you schedule the packets, or how you create your ACKs, and try to reduce that, but the problem, or the con, maybe, is that you have to be smart. It's not straightforward; you have to do it right, and maybe there's also more chance of doing something wrong than with the multiple packet number spaces.
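The ACK-size concern can be seen in a small sketch with hypothetical numbers: with one shared space, packets interleaved across a fast and a slow path arrive out of order at the peer, so the acknowledged packet numbers fragment into many ACK ranges, while a per-path space stays contiguous on its own path.

```python
def ack_ranges(received_pns):
    """Collapse a set of received packet numbers into contiguous ACK ranges."""
    ranges, run = [], []
    for pn in sorted(received_pns):
        if run and pn == run[-1] + 1:
            run.append(pn)
        else:
            if run:
                ranges.append((run[0], run[-1]))
            run = [pn]
    if run:
        ranges.append((run[0], run[-1]))
    return ranges


# Single shared space: even packet numbers went on a fast path, odd on a
# slow path. Snapshot where the slow path's packets have not arrived yet:
single_ranges = ack_ranges({0, 2, 4, 6, 8})
# Every packet is its own range, so the ACK frame grows with reordering.

# Per-path spaces: the fast path's own numbers are contiguous.
multi_ranges = ack_ranges({0, 1, 2, 3, 4})
```

This is the "you have to be smart" point: with a single space, either the scheduler keeps runs of consecutive numbers on the same path, or the ACK frames bloat.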
N
With multiple packet number spaces, you don't have that problem, so that's one of the big pros: it's very clear which packet belongs to which path. There's no ambiguity about, you know, where the packet was lost, or about when you calculate the round-trip time, or anything like that. And your ACK frames are smaller and clearer, and so the logic that you have to implement is easier.
N
The other drawback of the multiple packet number spaces is that, currently, we have to require connection IDs in both directions to make this work. But that is actually something that is under discussion, and maybe we come up with some smart ideas here as well. But, effectively, I don't think I want to discuss this topic here today. I think what we need, as I said earlier, is more implementation experience and feedback about this, and then hopefully we can have this discussion further in the working group.
N
I was trying to look at this right now. Let's actually break here and take those questions, yeah.
A
If you're okay with that, Mirja, I'd like to keep this to clarifying questions about what's been said already; we'll have time for discussion at the end. So, Kazuho, please fire away.
K
Thanks for the presentation. I have one question. One of the slides says that a path is defined as a 4-tuple, being bidirectional. Does that mean that if you receive packets on path X, you have to send ACKs for those packets on path X?
N
No. It means that you have to be able to receive packets on that path, and that's something that you check during path validation. But how to schedule packets, and how to schedule your ACKs, is, I think, actually not specified that much in the draft currently; that's left to the implementation. So there's no requirement to send the ACKs on the same path.
O
Can you hear me? Yes? Yeah. So, I have been reading the merged draft, and I just have one comment. I suggest that maybe we can add some use cases, or some paragraph in the introduction, to describe explicitly how we can use multipath QUIC. The reason is, you should know that 3GPP Release 18 has discussed ATSSS for a long time, and several companies strongly support adding multipath functionality into ATSSS, to achieve, you know, traffic splitting or even redundant transmission. So I think one of the valuable things here is that, perhaps, if we can complete the multipath QUIC work just before the completion of Release 18, then we can try to adopt this RFC into Release 18.
O
Maybe that's a good way to, you know, get this draft implemented in a real-world case. So, actually, I submitted a use case draft before this meeting, so I'm not sure whether I can share my screen for some time, or I can just simply describe the draft.
A
O
Okay, so, okay, let me simply describe it. So I just have a point.
N
I agree with you. I think this is a very important use case, and I'm supporting that as well, and we should maybe have some more discussion in future, but probably not in this meeting. So I think in this meeting we should figure out what the basic concepts are that we want to see in a draft, and then move forward and have this discussion later, if that's okay for you.
O
Okay, so, perhaps, yeah, once we have agreed on the basic setup of the extension, we can consider adding some introduction and some descriptions about how to, you know, advertise the draft to different organizations.
N
B
Thanks. A very quick clarifying question, Mirja: you said that the decision between these two choices would require more implementation experience, and Matt said in chat that this would be the first design decision that we would make if this draft were to be adopted. Those two are very different points, and I don't need it right now, but it'd be useful to clarify what exactly the path forward would be, because I would like to push back against something, which I'm not gonna do right now; this is just asking for clarification of what that would be.
N
So, if you ask me: I think, for example, people deciding to only implement one or the other is good input for this group. If everybody at the end decides to only implement one of the approaches, then we have a decision; we don't need much discussion about it. But I'm also pretty sure that, depending on the stack you're having, your implementation experience might be different.
N
So more input would be valuable here, because, based on the implementations we have so far, we can see that the implementation of a single packet number space is easier; but we also didn't evaluate a lot of the logic that you then have to implement, and whether that actually works very well. So more work is needed, from my point of view. But maybe Matt wants to say more.
L
Yeah, what I was going to say is that, you know, I kind of agree with Jana that it's a design issue, but I think what you're saying is actually kind of the same. Wait, Mirja, what you're saying is you want input from implementers about which design they think works with their implementation, not necessarily that you think everyone should go implement both and then be like, oh, I like this one better.
P
Q
Sorry, I was just waiting for somebody to tell me if we were finished. I wanted to thank you all for doing this work, Mirja. We've been talking about a strategy of sending ACKs on different paths, which you were leading up to; that's left to the implementations for now, and I'm not arguing about that. My clarifying question was: does that work with one sequence space, multiple sequence spaces, both, or neither?
N
That is completely independent. You can, even if you have a different packet number space, you can send it to a
P
Okay, okay. So you don't see special challenges here, and the description of multipath QUIC is completely agnostic to that, okay. Nevertheless, my question is specific to the multiple packet number spaces.
N
P
Yeah, okay, yeah. That's the same in DCCP: so you are aware of a packet that's lost. Nevertheless, you maybe want to reorder packets without retransmission, without requiring retransmission, and then multiple PN spaces can help to make fast decisions in the reordering process, whether a packet got lost or whether it's worth waiting for it.
A
I just gotta interject here; there's interesting chat going on. We can have this on the list and in issues; it's kind of beyond the scope of clarification. So thanks, Markus, but yeah, we'll follow up on that one. Yep. If you want to progress through the slides, Mirja.
N
Yes, let's go through my few slides, just to tell you what's in the draft; not much left here. So, as I said, there is something about path management. Path initiation works very similarly to path validation: you take path validation as it is from RFC 9000, no changes. The only change is that afterwards, after the validation has completed, you can actually send non-probing packets on multiple paths. Path validation can also only be initiated by the client.
N
So that's a restriction that we could further discuss in the working group, but that's what's currently in the draft. The thing that we did add was path removal, because with migration you don't need to remove the path you move over from; you close the old path immediately. With multipath support you have multiple paths, so you have to care about how to close them.
N
So, basically, what we do here, from a semantics point of view: we have two new frame types. One is the PATH_ABANDON frame; this carries a path identifier, an error code, and a reason phrase. There is also a path identifier type, which indicates whether the path identifier refers to the source or destination connection ID, or to the current path if no connection ID is present. So, effectively, you can send this frame on any path if you have a connection ID.
N
If you don't have a connection ID, you can only send it over the path that you want to abandon. So, if you have a connection ID, you have more flexibility in providing this signal to the other end. Then we also have a new ACK_MP frame. This is only used if you have multiple packet number spaces, but this extension is also minimal; it's really like the old ACK frame, it only has one additional field, which identifies the packet number space. Yeah, so that's kind of it.
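The two new frames just described can be sketched structurally like this. This is a hypothetical illustration built only from the fields mentioned above; the actual wire encoding, type codepoints, and exact field names are defined by the draft itself, not by this sketch.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class PathAbandonFrame:
    # Which kind of identifier follows: e.g. source CID, destination CID,
    # or "current path" when zero-length connection IDs are in use.
    path_identifier_type: int
    path_identifier: int
    error_code: int
    reason_phrase: str


@dataclass
class AckMpFrame:
    # The only addition over a regular ACK frame: which per-path
    # packet number space the acknowledged ranges belong to.
    packet_number_space: int
    largest_acked: int
    ack_delay: int
    ack_ranges: List[Tuple[int, int]] = field(default_factory=list)


abandon = PathAbandonFrame(path_identifier_type=0, path_identifier=3,
                           error_code=0, reason_phrase="no longer needed")
ack = AckMpFrame(packet_number_space=1, largest_acked=42, ack_delay=0,
                 ack_ranges=[(40, 42)])
```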
N
Again, what's in the draft: we really tried to focus on the core components that are needed to use multiple paths simultaneously, and all the authors have agreed on some key design principles, which is really to try to keep it as simple as possible, just add minimal stuff, and we are aligned with that. I think that was also what came out of the side meeting.
N
We currently have this option to negotiate one or multiple packet number spaces, but this is really for experimentation. So please, if you go implement this, decide if you want to implement one or both of them, and report back, so we can make a decision for one of those approaches at some point. But I think this is ready to be taken on by the working group, and to move the discussion back into the working group. At the side meeting there was a lot of interest;
N
there were also people talking about implementations. So I think it would be great to have this in the working group, and so we are asking if this is ready for adoption.
A
There's a lot of potential things that people could say about cool ideas they've got, or potential issues they might see, but we'd like to keep the discussion here focused on the question of, you know, viewpoints on whether we think this draft should be adopted or not. Towards the end of the eight minutes we have here, we'll take a show of hands as well, before taking any question to the list. But, yeah:
A
if people could keep it on the topic of adoption, that would greatly help the chairs. So, Jana is first in the queue. Please, please go.
B
Thank you for the presentation, Mirja, it was very helpful. At a high level, I would say that a document that lays out all possible designs, I would argue, is not ready for adoption, simply because it's a superset of all the things that you might want to do. Now, I haven't read the draft yet, so I'll admit that first, but
N
Can I interrupt you here for one second? So, we're not laying out all possible designs. We really tried to concentrate on the minimum, and if you look at the three different drafts, we actually found a lot of agreement, and the authors are all aligned on these points.
N
It's really just the packet number space, because, as you have seen on the slide, there's no clear good solution, right? So, rather than trying to take some choice now and then maybe revise it later, because we figure out it's not the right thing to do, we left those two options in there, only for this one issue.
N
B
So I would have to read the draft, I suppose. I mean, I'll take that back to myself; that's feedback to me. However, I'll say that, if we can resolve that issue before we talk about adoption, that would be helpful. Because, to me, ultimately, we are defining a protocol here. It's not rocket science, and the protocol is about the details, and the use of a path ID or not, the use of separate packet numbers or not,
B
actually, again, to me, should make a big difference in how the protocol is designed. So I would argue, take it for what it's worth, but I haven't read the draft, so I won't make it too strongly, I would say that my point is: we should resolve that issue before we really talk about adopting the document.
N
Point taken, but I also think what we have right now, and that was already the case in previous drafts, is that we have two specifications and they can work together. We have identified the things that overlap and have described them accordingly. So if we make a decision for one or the other at some point, we just go ahead and remove parts of the draft, and that's it. It's not like we have to redesign the rest, because we separated those things nicely.
N
So I think this is really only a small part, and rather than getting ourselves stuck on this kind of minor implementation point, I think we should move forward. We should make sure that everything is in good shape, incentivize people to implement this, and give a clear signal that we actually want to work on this and come to a conclusion.
A
Just based on some of the chat that we're having, and John's point there about not having read the draft, I'm going to take a quick show of hands on who has actually read this already. We're going to use the Meetecho tool for this, and the question is quite simple: have you read the draft? You can raise your hand to say you have. I'm going to start that now and we'll run it.
A
Okay, I'm going to close the poll now. We have 20 raised hands and 46 who did not raise their hands, for a total of 66 participants, in a meeting of 168 people. So, thank you.
K
Regarding the design: is there a design principle regarding the efficiency of the protocol? To give an example, is there a principle that multipath QUIC should perform as well as regular QUIC? I might have missed that, but I would hope that could be clarified, because that's what drives the design choices that we make.
N
So I don't think we have this explicitly in the draft. The design principles that we do list in the draft are the ones that led the author team to make its decisions; it's not a comprehensive list. So if you want to make sure this is captured in the draft, then please open an issue. I mean, I think this needs more discussion, because I don't even really understand what such a requirement would mean, because of course you have different path characteristics if you have multiple paths. You might have one long-delay and one short-delay path, and if you select the short-delay path for a certain application, you get a better experience than on the longer path, but it might be different for other applications. So I don't think this question is actually that easy. I'm happy to discuss it more, but it's a fundamentally different behavior.
K
Well, I was thinking about a strictly inferior case. For example, for congestion control and loss recovery, we might need per-path signaling of ack delays, but that's hard to accomplish with a single packet number space.
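To make the packet number space trade-off discussed here concrete, below is a minimal sketch of the sender-side bookkeeping each option implies. This is my own illustration, not code from any of the drafts; the class names and the `path_id` parameter are hypothetical.

```python
class SingleSpaceSender:
    """One packet number space shared by all paths: the sender must
    remember which path each packet number was sent on, so that an
    ACK range can be attributed to the right path's RTT/loss state."""
    def __init__(self):
        self.next_pn = 0
        self.sent_on = {}          # pn -> path_id

    def send(self, path_id):
        pn = self.next_pn
        self.next_pn += 1
        self.sent_on[pn] = path_id
        return pn

    def on_ack_range(self, lo, hi):
        # A single contiguous ACK range may mix packets from several paths.
        return {self.sent_on[pn] for pn in range(lo, hi + 1)}


class PerPathSpaceSender:
    """A packet number space per path: attribution is implicit, at the
    cost of identifying the path (or its space) in ACK frames."""
    def __init__(self):
        self.next_pn = {}          # path_id -> next pn

    def send(self, path_id):
        pn = self.next_pn.get(path_id, 0)
        self.next_pn[path_id] = pn + 1
        return pn
```

The point of the sketch: with a single space, every acked range forces a lookup to recover per-path state; with per-path spaces, the mapping comes for free but the ACK encoding has to carry a path or space identifier.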
E
Thanks. Yes, I've read the draft. I support this work and support adoption, although I'll probably personally not implement it. I don't think we need to resolve the packet number space issue prior to adoption. I will say the reason I support it is that I think there's a lot of desire for a multipath solution, but there are a lot of big experimental questions, and of all the transports we have, I think QUIC is the best suited to do open internet experiments, because it'll actually get through the internet.
E
Unlike all the other options. I would probably argue it should be an experimental draft, but we don't have to settle that now. Thanks.
R
All right, so I have read the unified draft and I definitely support adoption of it. I think the work to unify the drafts has greatly improved the proposal as it's come together; this is better than any of the individual drafts. It doesn't have a lot of excess bits hanging off of it; it's focused and minimized, and I think a minimal implementation is exactly the type of thing that we want to adopt here, because otherwise we're going to spend a lot of time getting distracted.
R
Our implementation really needs to start looking at this soon, so having something that the working group is looking at is good. There is an open question about packet number spaces; I think that is better addressed by the working group rather than having the authors try to come up with one thing without listening to the whole working group. So let's adopt it and figure out that question together.
C
Yeah, I support the last two statements. In our case, we finally finished connection migration, so I think we finally feel like we have the bandwidth to really take some of this work on and actually contribute, whereas before we didn't; so I feel like now is the right time. I'd love to get the packet number space issue resolved, but I don't think it needs to be resolved before adoption. I also don't want to make everyone implement it.
C
But I think we can hammer that out pretty quickly in the working group. The only other comment I'd make is: if we do this, we may want to consider having a separate one-hour meeting just for multipath or something, because I get the impression there are some people who attend the QUIC working group who are distinctly uninterested in spending their time on this. But I don't know; you'd have to ask around about that.
F
I definitely support adoption; that's not surprising. But the big reason I support adoption is exactly what Tommy said: we want the discussion to happen in public in the working group, not on a confined mailing list between the authors. If we get adoption, what will happen is that we'll move the current GitHub repo in which we're discussing to the working group, which will have change control and things like that, and I think that opens up the discussions.
B
So I made the point about not adopting this, but perhaps the spirit of the question to me now is whether we should work on multipath and whether this is a document we can start from. To that, my answer would be yes, because there's only one document, and I think it's a good time to start working on multipath as a working group. So I would like to change the position I stated earlier to: yes, let's adopt it and let's hash out these issues.
L
Okay, and with that, we're going to run two polls. Lucas, do you want to describe the polls before you run them?
A
Yeah. I just want to thank people for their feedback during this discussion, and the comments as well; this helps the chairs gauge your opinion on this matter. So, to that end, I want to run a poll to gauge interest in who thinks that the working group should work on any form of the multipath problem, and on a multipath extension to QUIC to resolve that problem. I'm going to start that poll now.
A
We're getting a bit short on time, so I'm going to close this poll soon. Some results are already coming in that look pretty strongly in favor of the working group working on this. The results are in the poll tool at the end, but we have 52 raised hands and three who did not raise their hands. Let's leave it there. For anyone that said they don't believe we should work on this: do you want to make a comment at the microphone?
A
No? Okay, so in that case, we'll do another poll about adopting this specific draft as the solution to this.
A
Again, raise your hand if you agree that this draft should be adopted, and do not raise your hand if you disagree.
A
I don't see anyone. Okay, cool. We'll take this away as chairs, speak to some people on the back channel, and do our thing, so stay tuned on the list. We've got Martin Duke now; we've changed the agenda mid-session based on some feedback and the version negotiation chat that was in David's slides. So Martin is going to talk about QUIC v2, or whatever we might want to call it.
E
Hello again, everyone. Thanks to Lucas for what was maybe a typo, typing this "t-o-o"; I've decided to really embrace that, because there's a bikeshed about what the version number should be, whether it's actually the numeral two or not. So I'll be very brief. Why is there a v2 draft? Number one:
E
When we talked about greasing the version number, people said we should roll out a v2 as quickly as we can, and this is that. Number two: to exercise the version negotiation framework, as David discussed. And then, if there are any fixes we need to make, any emergency patches to QUIC, this could be a vehicle for that. It's a very short and simple draft. It is exactly the same as v1, except for the version number, the salt, the HKDF labels, and the sort of things you're supposed to roll for every version.
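For context on the "salt" being rolled per version: QUIC derives its Initial secret by running HKDF-Extract over the client's Destination Connection ID with a version-specific salt, so changing the salt alone already gives a new version different Initial keys. A minimal sketch follows; the v1 salt and example DCID are the ones published in RFC 9001, while the v2 salt shown is a placeholder, since the real value would be fixed by the v2 specification.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) is just HMAC-SHA256 keyed with the salt.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# Version-specific Initial salts: v1's comes from RFC 9001;
# the "v2" value below is a PLACEHOLDER for illustration only.
V1_SALT = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")
V2_SALT_PLACEHOLDER = bytes.fromhex("00112233445566778899aabbccddeeff00112233")

client_dcid = bytes.fromhex("8394c8f03e515708")  # example DCID from RFC 9001

# Same connection ID, different salt -> a completely different Initial
# secret, which is part of what makes version-unaware middleboxes unable
# to parse the new version's Initial packets.
v1_secret = hkdf_extract(V1_SALT, client_dcid)
v2_secret = hkdf_extract(V2_SALT_PLACEHOLDER, client_dcid)
assert v1_secret != v2_secret
```

The HKDF labels mentioned in the talk (e.g. the "quic key"/"quic iv" style labels) would likewise be renamed per version in the subsequent HKDF-Expand steps, which this sketch omits.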
E
There are a couple of bikesheds about ALPN and about what the version number should be, but those were discussed pretty thoroughly at IETF 111 and nothing's happened since then, so I direct you to the records of that discussion if you are interested in those questions. So I guess I think it's about as ready as it ever will be. If people want to implement this and take it to standard, we can do that.
S
A clarifying question: is there any reason that we'd want to include those couple of bikesheds? It seems kind of a no-brainer to do the first bullet point, which is basically just: let's rev the version number. Are we trying to open up the scope to a bunch more than that, or is it still just that?
E
In my view, these are fundamental questions about what happens when you roll a new version, questions that, in my view, we didn't really decisively resolve in shipping 9000. Again, I don't want to spend a ton of time on it; I just refer you to the questions of how we number things and what the status of ALPN is for new versions.
H
Thanks for the presentation, Martin. I think it's not a given that we want to ship a QUIC version 2 with minor changes, because that'll break a lot of things. So, for example: sure, adding QUIC v2 support to my code base is trivial.
H
That is, if we assume it's a version that just has different salts and whatever. But if I do that, then all of our tests will have to run with another version, and the carbon impact of that is significant, which is just terrifying when I think about it. But more importantly, if we deploy this on our servers, it means that now we need to support these things for longer. We need more versions in Alt-Svc, and we can end up with compatibility problems.
H
Take, for example: we still have middleboxes out there that like to extract the SNI from QUIC packets. They will break. Some people will tell you that's a feature; some people will tell me to roll back my change because they consider it a bug.
H
So what I'm trying to say is that there is a non-zero cost to deploying a QUIC version 2. Therefore: is it the right move to do a QUIC version 2 whose sole purpose is to be version 2, or do we want to take up some more work and fix some things that we want to fix as part of that effort? But then we open ourselves up to second-system syndrome. So I just wanted to put that option out there. It's not free to do a version with a few changes.
E
Thanks, David. I think that's a good point. I would say that's kind of the nature of greasing, right: you're deliberately breaking middleboxes that are making improper assumptions about the packet format. But fair enough; yes, absolutely there's a risk of breakage if people adopt this.
H
Yeah, in this specific case, if the middlebox is properly parsing QUIC v1 packets and is making the assumption that it can only see the version that was a published RFC when it was coded, it's not technically doing anything wrong. But yeah, thanks.
T
Okay, thanks for thinking this through. I guess I'm softly in favor of this; I think it'd be good. I may not be as worked up about the middlebox thing as others might be, but I think it has some value. I think greasing, and getting some experience with the machinery of version negotiation, would be helpful, since we're actually going to specify it. I would be against making any technical changes at all, really just on pragmatic grounds.
T
So I think we get to publish this tomorrow, more or less as it is, but if we admit anything beyond editorial clarifications, then once you open things up, it starts to be really hard to stop, right?
T
Yeah, and I think what I would do, and I still failed to do this with 8446, is put a two-month limit on the call for things, and they have to be really clear defects of some kind or another, and then we just fix them and go.
T
Let me put it this way: it should absolutely be the case that, modulo these constants, your code and the wire image should not change, and if code or wire changes, then that's probably bad news, right? So that'd be my point.
D
I need to preempt this: I'm also sort of soft on this. I am maybe a little more supportive than ekr, because I don't see us shipping the version negotiation stuff without something like this, and so I think we should do this. Your discussion previously, though, made me think that you were doing a bis document, but it doesn't seem to be like that in its current form, and I prefer its current form. I think what you've done is almost perfect for it.
U
Hello. Yeah, so: everything Martin said and everything ekr said. I would actually go a little further than that and request that the chairs schedule a working group last call at adoption time, just to say there is an actual deadline, to keep down the temptation to stuff things in. I also agree a bit with David's point; we should consider the cost of doing this.
E
Thank you. Yeah, I'm very intrigued, and I'm very sympathetic to the sentiment to just ship it immediately. I do think we have to work out this ALPN issue and just decide what we're doing.
R
Yeah, I'll echo Martin and Brian: I think this is good to adopt, and I think it's good to do. The reason I came into the queue is just to answer David's point about it being expensive. I don't think we need to see everyone do this; we just need enough greasing that the internet sees the new version going by, and there are places where we could do it.
R
We
have
different
stacks
like
we
don't
have
like
the
stack
that
youtube
uses
doesn't
have
to
use
this
right
now.
You
know
we're
using
quick
for
a
ton
of
traffic
with
our
private
relay
stuff
on
apple
devices.
We
could
you
know
like
we
don't
care
about
sni
there.
We
don't
care
about
any
of
this
stuff.
A
Well, thank you for the discussion. Based on that feedback, we'll probably send an adoption call to the list, so I encourage people to look out for that and provide further feedback there. That gives us some time for our as-time-permits bucket. We'd like to go with the first of those items, which is Ian talking about ACK receive timestamps. Or is Conor presenting?
C
It's going to be Conor. Feel free to ask questions of me as well, but he's kind of the expert on this in a lot of ways.
V
So, my name is Connor. I formerly worked at Google, developing congestion control algorithms for real-time interactive streaming applications such as cloud gaming, in the form of Google Stadia. I'm presenting this work in collaboration with Ian, and we're proposing a QUIC extension for reporting receive timestamps, used to estimate the network queue state. My team at Google also recently submitted a paper, which we hope to get accepted and published and to share with you all, detailing how we use receive timestamps for these kinds of latency-critical, real-time media applications in production. So let me skip ahead.
V
At a high level, we're suggesting a new ACK receive timestamp frame type, to be sent in the same manner as, and in the place of, a normal ACK frame. It has all the fields of the normal ACK frame, but also includes additional fields to encode receive timestamp information in an efficient varint-based format, which we'd love feedback on. We've defined two transport parameters to negotiate the use of this frame; the first specifies the maximum number of receive timestamps.
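For readers unfamiliar with the encoding being referenced: QUIC's variable-length integers (RFC 9000, Section 16) use the two most significant bits of the first byte to select a 1-, 2-, 4-, or 8-byte form, so small values (such as timestamp deltas) stay compact on the wire. A minimal sketch of that encoding, my own illustration rather than code from the extension draft:

```python
def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer (RFC 9000, Section 16): the two most
    # significant bits of the first byte encode the total length.
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | (0b01 << 14)).to_bytes(2, "big")
    if v < 2**30:
        return (v | (0b10 << 30)).to_bytes(4, "big")
    if v < 2**62:
        return (v | (0b11 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a QUIC varint")

def decode_varint(buf: bytes):
    # Returns (value, number of bytes consumed).
    length = 1 << (buf[0] >> 6)
    value = int.from_bytes(buf[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length

# Worked example from RFC 9000, Appendix A: 15293 encodes as 0x7bbd.
assert encode_varint(15293) == bytes.fromhex("7bbd")
assert decode_varint(bytes.fromhex("7bbd")) == (15293, 2)
```

A timestamp format built on varints inherits this property: a delta of a few hundred microseconds costs one or two bytes, while an occasional large value still fits.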
F
Yeah, I look at that design and it's kind of a logical design: if you want to do fine-grained reporting, you have to include that in the ACK format. Now, the reason the current extension doesn't do that, the reason my proposal doesn't do that, is because I got feedback in the working group at the time that we wanted the extension to be composable.
F
That
is
that
we
wanted
the
extension
to
be
outside
of
the
arc
format,
for
example,
so
that
you
people
that
use
the
arc
format
can
go
on
using
it.
But
if
the
extension
is
present,
they
can
compose
the
extension
with
the
arc
format
and-
and
that's
that's
one
of
the
subtle
difference
that
you
mentioned
and
whether
it
should
be
basically,
let's
change
the
arc
frame
to
have
an
ac
plus
timestamp
frame
or
whether
we
should
have
a
timestamp
extension
and
parallel
to
the
arc
extension
the
arc
frame.
C
Yeah, thanks for bringing that up, Christian; we talked about this a long time ago. There are definite advantages to including it in the frame, which is to say that the congestion controller gets to consume all the acknowledgement information simultaneously, which just makes it easier to write a congestion controller. But you're right that there may be use cases for which this is less suitable, because it is not composable, and so it might depend on the use case. I don't know.
G
Thank you. I wanted to ask just a clarification question. If there's an ACK timestamp frame, is it going to follow the same frequency as the ACK_FREQUENCY frame, or probably less than that? And when there are compressed ACKs, when you are ACKing maybe once per RTT, that probably won't work for real-time transport. So, just something to consider, I guess.
C
Yeah
so
clearly
the
the
real-time
congestion
controller
would
have
to
decide
what
frequency
of
acknowledgement
it
needed.
If
you
used
active
frequency,
and
so
I
assume
that
it
would
want
to
use
a
value.
That's
you
know
more
canonically
typical,
like
a
quarter
of
an
rtt,
half
an
rtt
or
less.
But
yes,
and-
and
I
don't
know
I-
I
guess
I
I
feel
like
this-
is
a
tool
that
a
congestion
controller
can
use
to
get
more
information.
But
it's
you
know.
C
What
packet
loss
and
other
signals
are
used,
but
they're,
I
think,
they're
not
used
for
bandwidth
estimation
per
se.
Conor
might
have
more
like
information,
but
I
think,
probably
probably
that's
a
this
was
sent
out
to
the
list.
Would
you
mind
posting
there,
since
we
have
zero
minutes
left.
B
Thanks. Two quick points. First, I would strongly argue for composing, the reason being that the number of times you want to repeat this information is different for an acknowledgement versus for a timestamp. A timestamp is typically advisory, and you just need to send it once; even if the peer does not receive it, it's fine. But with acknowledgement information, you may want to keep repeating the information, so there's a lot of repetition.
B
That
happens
there,
which
you
can
avoid
entirely
when
you're
doing
time
stamps
so
having
a
separate
frame
that
conveys
that
information
would
allow
you
to
do
that,
whereas
it'll
be
tricky
if
you're
trying
to
put
that
into
the
back
frame
itself.
B
Second,
the
the
reason
that
arc
ranges
exist
in
the
act
frame
is
because
we
can
express
a
list
of
acts
as
a
range.
It
doesn't
make
sense
to
me
to
express
a
timestamp
range,
I'm
not
quite
sure
what
that
means,
and
what
that
really
is.
So
I
would,
I
would
have
expected
to
see
a
timestamp
per
packet
that
is
being
acknowledged,
because
that
is
really
what
wants
to
receive.
C
Yeah,
that's
what
the
format
actually
does.
It's
just
that
the
range
is
used
because
typically
packets
aren't
received
in
order,
and
there
are
not
a
huge
number
of
gaps
and
the
range
it
allows
you
to
express
ranges
of
timestamps
more
efficiently.
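The range-based efficiency argument can be sketched as follows: for a run of in-order packets, the receiver can send one base timestamp plus small per-packet deltas, each of which fits in a short varint. This is only an illustration of the idea; the `compress_run` and `decompress_run` helpers are hypothetical, and the draft's actual wire format may differ.

```python
def compress_run(timestamps_us):
    """Delta-encode the receive timestamps of one contiguous packet run:
    keep the first timestamp, then store successive differences."""
    base = timestamps_us[0]
    deltas = [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
    return base, deltas

def decompress_run(base, deltas):
    """Rebuild the absolute timestamps from the base and the deltas."""
    out = [base]
    for d in deltas:
        out.append(out[-1] + d)
    return out

# Four packets arriving ~220 microseconds apart: instead of four 8-byte
# absolute timestamps, we carry one base plus three tiny deltas.
ts = [1_000_000, 1_000_210, 1_000_430, 1_000_650]
base, deltas = compress_run(ts)
assert decompress_run(base, deltas) == ts
assert max(deltas) < 2**14   # each delta would fit a 2-byte QUIC varint
```

Out-of-order arrivals break a run, which is why the frame, like the ACK frame, would naturally fall back to multiple ranges when there are gaps.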
A
Yeah, so thanks for your time; we need to wrap this up now, so let's keep it short and sweet. I'd like to thank everyone for this meeting and for behaving well. Thank you to our scribe, Watson, and our notetakers, Jonathan and Robin. Nico, apologies that we couldn't get to your as-time-permits item; we'll follow up on that off-list. We have some chair actions we need to take; we'll speak to the ADs and various people on the back channel, so keep your eyes on the mailing list.