From YouTube: IETF114 QUIC 20220728 1400
B
If that's not where you expect to be, either physically or digitally, then I would hurry on over to where you're supposed to be.
B
So, some things: you know, it's Thursday now, so everyone's probably used to this, but remember the importance of the IETF Note Well. This is a copy of it. I'm sure you've seen it; if you need to find references to it, it is very easy to find.
B
So, some IETF meeting tips. In-person participants: you've probably used this by now, but remember to use Meetecho Lite when you're in the room. This is useful for us chairs to keep track of the queue with both remote and in-person participants.
B
And also, we want to stress that those there in person should be wearing a compliant mask at all times. There are N95 masks available at registration. Please remember to keep your mask on. And remote participants: the normal rules apply for making sure your audio and video are off unless you're, you know, presenting, and we do recommend using a headset and/or checking to make sure your levels are good on your mics.
B
So for the minutes, we have our tireless volunteer Robin Marx, who has volunteered to take notes, and we thank Robin yet again. The blue sheets are again being handled through Meetecho, so remote participants don't have to do anything. Those there in person, again, please use Meetecho Lite. For chat, we are of course using Meetecho and Zulip. And for the queue: the chairs, despite our arguably larger-than-life presence, will be running the queue with the assistance of Zahed, who, as you can see, is up there in person.
E
Jumping in quickly there, sorry: we did have Brian volunteer to take notes as well. That's great; the more the merrier. If somebody in the room would like to assist the remote notetakers, that would be greatly appreciated, so do dive into the notes and contribute where possible. I don't think we had anyone volunteer for Jabber scribe, though.
E
Would anyone like that job? Just in case we need to relay anything to the mic — one of us could probably do it too, but—
E
Jonathan, that would be very helpful. Thank you. All right.
B
Thank you to all who are helping us run this meeting. All right, so: the agenda, which can be bashed. We have chair updates as usual, which you'll have to suffer through, and then we have three working group items. The first and longest is multipath, followed by the acknowledgement frequency work, and then QUIC-LB.
B
After that we have other items, including QUIC timestamps, multicast QUIC, and then a quick announcement about tooling. Martin, would you like to say something?
G
Yes — Martin Duke, Google. Kind of an agenda bash, but rather than actually introduce another presentation, I would just like to briefly say that I think most people are aware that Lucas and I have a draft to put QUIC versions in Alt-Svc and SVCB DNS records, just as we dissociated ALPN from QUIC version in HTTPbis. Today, they are likely to dramatically strip down Alt-Svc to not have parameters, and SVCB would become the transport capability discovery mechanism.
G
The upshot of that for this working group is that we will likely accordingly strip down our draft to just SVCB, and the place where SVCB stuff lives is the protocol working group where it applies, which means we will likely be bringing this draft to QUIC sometime in the next month or two. So look for that, and we'll probably call for adoption relatively early, since it's a simple mechanism. Thanks.
B
Thank you for that, Martin. Is there anyone else who would like to agenda bash or otherwise has a point of order here? If not, I will move on.
B
Okay, Mirja is not ready to go. Okay — updates since the last meeting. As probably everyone hopefully noticed — and if you didn't, I have great news for you — HTTP/3 became an RFC at long last, and of course another document, the QPACK-related one, did as well.
B
This is obviously a big milestone. They lagged considerably behind the other QUIC documents for reasons that are not really worth going into, but we no longer have those on our plate, and they are RFCs, where they can live in perpetuity. And then other updates: the QUIC grease bit, which is a relatively simple but important document, is currently in AUTH48, and Version Negotiation and v2, which are sort of proceeding together, as was agreed, are currently under AD evaluation.
B
So, other things: we had issued an adoption call, and it completed, for QUIC Careful Resume. This got some feedback about its potential suitability for other working groups, specifically given the discussions around where congestion control work lives at the IETF, so the chairs are going to take this under consideration.
B
We would also like to call out the L4S work, which you may have heard about at this IETF: work was done during the hackathon, including interop with QUIC stacks, which is very exciting because QUIC is participating in the L4S work. And I think that is it. Lucas, do you have anything else that you would like to add to our boring chair updates?
E
Just that, you know, Matt and I were both really excited to be there in person this time around, to actually meet each other for the first time in however many years, and to meet all the participants as well. Unfortunately, we can't be there for various reasons. So thank you very much for your patience, and our thanks and support to the people who have been helping behind the scenes to make sure this session can run smoothly.
C
Try it again... no — there we go.
C
Okay, hello everybody. My name is Mirja Kühlewind. I'm one of the editors of the multipath extension for QUIC — probably the least important editor, but I'm the only one here, unfortunately. Yeah, so we published a new version, and this was actually quite a big update, based on the discussion we had last time. The biggest change is that we merged the PR which has what we call the unified proposal.
C
We discussed this last time, and I have a slide on it later. There were a few other changes that I want to update you on. One, also an issue we discussed last time, is that we have now decided to have some mechanism to signal that you want to use one of the paths only for standby mode, and we have a new frame for that. There are some clarifications about path closure, or rather denying a path, which is also actually kind of new protocol work.
C
Yeah, we didn't solve, or we didn't close, all the open issues. We still have three of the open issues that we discussed last time, and these need a little bit more work, especially the first one. It's on my plate to look at what we actually decided, or why we didn't allow servers to migrate, and whether that's still important for multipath, so I will do some research and come back to that. We just didn't decide yet.
C
Is this something we want to do in the base draft, or is this an extension in the future? That discussion is ongoing. We also have a few new issues, but I have to say these are a little bit more on the editorial side, I think — not fully, but, as I just said, there is more work needed on explaining how to calculate ACK delay correctly, and maybe some of this is kind of normative, but it's not clear yet. So that's ongoing.
C
Yes. So I wanted to explain the problem quickly here, though I'm not sure if we will solve it completely or finalize the discussion here. The problem we have — and there's still a part about using a single packet number space — is that if you use a single packet number space, then things like calculating your round-trip time and ACK delay correctly, which you need for congestion control, are actually not that easy. I'm just giving an example here.
C
So what can happen is that you receive all the packets on path 1, because that's the faster path, but then, depending on how you decide on your ACK strategy, you actually trigger an ACK when packet 20 arrives — so that's the packet that triggers the ACK — but of course that ACK will acknowledge all of the packets, so 1 to 30. And when you receive that ACK, you don't actually know that packet 20 triggered it, so you might not know how to calculate the round-trip time or the delay of each of these paths correctly, because packet 20 was sent on path 2, but packets 21 to 30 were sent on path 1, and the delay you calculate based on the ACK is effectively the delay of path 2.
C
So what usually happens is, if you just do what you would do today with RTT calculation, you get the delay of the longer path and not the delay of the shorter path. So you have to be smarter about this. You have to either, you know, add some more information to the ACK, saying which packet this delay belongs to—
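As a concrete illustration of the RTT ambiguity being described, here is a small Python sketch of the packet-20 example. All the numbers, variable names, and the scenario itself are invented for illustration; this is not code from any implementation or from the draft.

```python
# Packets 1..30; packet 20 goes on the slow path 2, everything else on the
# fast path 1. All send times and delays below are made up for illustration.
PATH_OF = {n: (2 if n == 20 else 1) for n in range(1, 31)}
SEND_TIME = {n: 0.010 * n for n in range(1, 31)}  # seconds

PATH1_OWD = 0.020  # one-way delay of the fast path
PATH2_OWD = 0.200  # one-way delay of the slow path

# The receiver's ACK is triggered when packet 20 finally arrives on path 2,
# and it acknowledges everything up to 30. Assume the ACK returns on path 1.
ack_sent_at = SEND_TIME[20] + PATH2_OWD
ack_arrival = ack_sent_at + PATH1_OWD

def naive_rtt_sample(largest_acked: int, ack_time: float) -> float:
    """RFC 9000-style sampling: measure against the send time of the
    largest acknowledged packet (here packet 30, sent on path 1)."""
    return ack_time - SEND_TIME[largest_acked]

sample = naive_rtt_sample(30, ack_arrival)
true_path1_rtt = 2 * PATH1_OWD
# The sample gets attributed to path 1 (packet 30's path) but is dominated
# by path 2's one-way delay, so path 1's RTT estimate is badly inflated.
```

With one packet number space per path, the same sampling rule becomes unambiguous, because every packet number acknowledged in a space was sent on that space's path.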
C
And my phone goes to sleep, and now I cannot— no, no, I can control it again. Okay, yeah. So that was just to explain the problem, and I think we should have the discussion actually on GitHub, because it's not like we need to change something, but I think we have to be clear about what the best way to do it is. And then, a quick review about what we've merged already, based on the questions from last time. So this is the new path status frame.
C
The path status frame has this path status information, which is the important part, and what we specify is really that you have two choices here, standby or available, and what it does is just really give a recommendation to the other side about what to send on the path. It doesn't say anything about what you're planning to do on your side, right, but you give some hints to the other side that it's preferred, from your point of view, to use one or the other path, or to not use it. And that's it, effectively.
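A minimal sketch of how a sender might honor that recommendation. The constant names and the scheduler API here are mine, not the draft's wire encoding; the key point is that standby is a hint, not a prohibition.

```python
# Peer-advertised path status values (names are illustrative only).
AVAILABLE, STANDBY = 1, 2

def pick_path(paths):
    """paths: list of (path_id, peer_status, validated) tuples.
    Prefer validated paths the peer marked AVAILABLE; fall back to
    STANDBY paths, since the status is only a recommendation."""
    usable = [p for p in paths if p[2]]
    preferred = [p for p in usable if p[1] == AVAILABLE]
    chosen = preferred or usable
    return chosen[0][0] if chosen else None
```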
C
Okay, I'll go on. The other thing we merged, which is a normative thing here, is that — other than in RFC 9000, where, basically, if you have a migration event and you don't want to have the new path, you just don't react — we actually give an option here to actively deny a new path by using the path abandon frame on the existing path.
C
So this is just because we have the path abandon frame; we can make use of it, and that can avoid some delay. So it's a little change, but it can help optimize some situations.
C
I'll go on; we can come back to this. Okay. So this was the biggest merge we have. This slide is mostly recap — slightly modified, but mostly recap from last time. So the big question we discussed last time is using single versus multiple packet number spaces, and there are pros and cons on both sides, right?
C
So one of the biggest differences was that a single packet number space does support zero-length connection IDs and multiple packet number spaces do not, and that was most of the discussion we had last time. From a code complexity point of view, there are kind of pros and cons on both sides. At first look, a single packet number space seems to be much less code and much easier on that point, because you don't have to introduce a new packet number space.
C
But then, if you look at these delay calculation issues — how to get congestion control right, making sure you schedule the packets in a smart way so that your ACK size doesn't grow, and whatever else — it gets very complicated. And it's not like there is a normative way to handle it; it depends on the implementation how to do this, so it's easy to get wrong, effectively.
C
With multiple packet number spaces, you actually have to change the crypto part — how you decrypt the packet number — because you have to consider the connection ID. But on the other hand, if you have multiple packet number spaces, it's actually easier for your hardware offload, because then you know exactly — you don't have to guess the packet number, because you can much better figure out where you are, and you don't effectively have a lot of out-of-order packets in this kind of thing.
C
So this is very much nearly the same slide as I presented last time, and we just merged this in now, which means, basically, multiple packet number spaces are now mandatory.
C
So if you negotiate the multipath extension, you have to support multiple packet number spaces, but it's optional to use zero-length connection IDs. If you want to use zero-length connection IDs, you also have to implement all this other stuff, and then it gets complicated, and we put a lot of guidance and discussion in the document about how to do that and how to get your delay calculation and your congestion control handling right. But it's kind of optional.
C
If you don't support a single packet number space and the other end requires you to use zero-length connection IDs, basically you can only use one path, even if you have negotiated multipath. But there's no obligation to implement it, at least.
H
It took me — Ian Swett, Google — it took me that entire time to get into the queue and get the pages to load. Can you go back to the last slide? Just a curious question about hardware offload: is there hardware offload available for QUIC today? Because I've looked into this and I haven't actually found it. And you might be right that it makes hardware offload more complex, but it really depends on what the hardware offload API is, and, like, I'm not personally familiar with them. I don't know.
C
So, okay.
G
So, yeah, we should talk more. That was my issue a while ago, and it is speculative, based on conversations that I've had with hardware vendors in the past — issue 25 is kind of that. But no, there's certainly no hardware offload that I was aware of when I filed this issue. It was just a concern that, by messing with the crypto algorithm, we're going to have this other mode for multipath, which might not be a very viable thing.
C
So there is no hardware offload which is, I think, already deployed, but we did some research and we had some work on how to implement hardware offload, and we only went for the multiple-packet-number-spaces-based solution, because you don't have a lot of packet reordering or whatever, which actually makes your offload engine easier. It's different from what's in RFC 9000 today, but it would make it easier if we have multiple packet number spaces, and given it's not deployed yet, that might actually be a reason to go there as well.
C
No, you don't need special hardware to do multipath. It's just that you have to optimize: you can do less with RFC 9000, and you need more if you want to support multiple packet number spaces, but it's the same logic — it's just an extended logic, right? It's not like you have two different paths to go down.
B
I just wanted to point out, chair hat off, on the hardware offload thing: no, yeah, again, there's no hardware offload. For the hardware offload that we have been working on, the vendors involved would probably not like multipath anyway. They typically are not happy about the hardware doing anything related to QUIC that's complicated; they don't like packet numbers either. So I would—
B
I would say that it probably is true that multiple packet number spaces are a little bit easier to support, but it's also going to vary depending on the vendor you talk to — what their opinions are on what is hard for their hardware — because what's interesting about this is that different hardware can do different things more easily than others when it comes to crypto offloads, and QUIC in particular.
C
Yeah, so, I mean, there is this issue in GitHub, and it's still open and needs more discussion, but I don't think the decision is clear in which direction we should go here. Christian?
J
So that means that whatever offload there is has to be aware of not just the connection ID in the incoming packet, but the sequence number of that connection ID. So that's conceptually a really simple fix in the API, but it has to be there. This should not be — I mean, if people can actually do hardware offload and extract the sequence number and things like that, the additional step to support multiple packet number spaces is not that high.
J
It's just that — I mean, you have to extract the 32 bits of the sequence number of the connection ID, and you have to use a 96-bit sequence number, which is composed of this ID's 32 bits and the 64 bits of the packet number. So it does change the API. It doesn't change the complexity very much, because you need the CID anyhow; otherwise you're not able to find the context of the connection, and if you can find the context of the connection, you can find the keys.
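The nonce construction Christian describes might be sketched like this. The exact field widths and layout here are my reading of his description (a 32-bit CID sequence number concatenated with a 64-bit packet number, XORed into the 12-byte AEAD IV), not the draft's normative text.

```python
def multipath_nonce(iv: bytes, cid_seq: int, packet_number: int) -> bytes:
    """Build a 96-bit per-packet AEAD nonce from the connection ID sequence
    number and the packet number, XORed with the 12-byte static IV.
    Illustrative only; check the multipath draft for the real construction."""
    assert len(iv) == 12
    combined = (cid_seq << 64) | packet_number  # 32-bit CID seq || 64-bit PN
    v = combined.to_bytes(12, "big")
    return bytes(a ^ b for a, b in zip(iv, v))
```

Because the CID sequence number is folded into the nonce, each path (each CID) gets its own nonce space, which is why the offload API only needs the CID sequence number handed down alongside the packet number.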
K
Hi, so I want to emphasize a couple of points that both Matt and Christian just said. One of the things, when thinking about hardware offload, is that people typically use the PCB model to think about it, which is why sequence numbers seem like an attractive thing to consider handing off to the offload engine.
K
That's not necessarily true — and in fact not true in the QUIC case — for a number of reasons, one of them being that we don't necessarily expect sequence numbers to be sequential. We want to have gaps; we want to be able to do various things with sequence numbers. That's in fact part of the protocol philosophy, I would say, and even the design: our packet numbers are not basically stream numbers. Stream ID and stream sequence number — sure, that's fine, but not at the packet level. We don't necessarily need it to be sequential.
K
We don't need it to be in any particular order. The sender ought to be able to send in whatever order it cares to, and the receiver is able to handle it. That's important. And so I would say, going forward — to Christian's point as well — that something explicit is very likely to need to be handed down to the offload engine from the QUIC sender, and it is not something that you can simply offload as a task to the offload engine.
K
So I don't think that sequence numbers should play a role in how we decide on hardware operation.
G
Martin Duke, Google. So, at the risk of going down the hardware offload rabbit hole: the encryption and decryption problems are quite different. The encryption problem — yes, absolutely, you just pass the packet number with the packet, and I think it works pretty well. On the decryption path, the hardware needs to keep some state, obviously, because it has to, you know, predict the expanded packet number.
G
You know, it has to expand it from the truncated packet number. So that's a harder problem, and it becomes even harder with multipath. But, right, if you build the APIs right, it doesn't matter; it's just a question of whether vendors will find it worthwhile to implement the APIs, and I don't have an answer to that.
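The packet number expansion Martin refers to is specified in RFC 9000, Appendix A. A direct Python transcription of that algorithm shows the state a decryption offload has to carry:

```python
def decode_packet_number(largest_pn: int, truncated_pn: int, pn_nbits: int) -> int:
    """Expand a truncated packet number (RFC 9000, Appendix A): choose the
    candidate closest to largest_pn + 1 whose low-order bits match the
    truncated value received on the wire."""
    expected = largest_pn + 1
    pn_win = 1 << pn_nbits
    pn_hwin = pn_win // 2
    pn_mask = pn_win - 1
    candidate = (expected & ~pn_mask) | truncated_pn
    if candidate <= expected - pn_hwin and candidate < (1 << 62) - pn_win:
        return candidate + pn_win
    if candidate > expected + pn_hwin and candidate >= pn_win:
        return candidate - pn_win
    return candidate
```

With multiple packet number spaces, an offload engine would need a separate `largest_pn` per space, which is the extra per-path state being discussed here.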
C
Which is my last slide, I think. So we merged in this proposal, again saying multiple packet number spaces are mandatory and a single packet number space is optional to implement, basically. And there's some more editorial work here, because the way it's structured in the document doesn't make that fully clear, but it's really just editorial work that we will do.
C
We have some remaining issues, but, as I said, for all of these issues we really need to decide whether they should be addressed in this document or in an extension — what's the part of the base document that we want to discuss.
C
I don't think it's hard to find a solution to this problem; it's just a decision about what we want to do here or later. And then the most important part is that we really want more implementation experience, because, you know, if we all end up only implementing multiple packet number spaces, maybe we don't need the single packet number space. Or if we actually think that zero-length connection IDs are not that important—
C
—we don't need the single packet number space. Or maybe there are additional considerations for hardware offloading that actually change our minds. So I think these are the open questions, and what we need is implementation experience.
L
Eric Kinnear, Apple. Just a really quick question about the path abandon stuff: is the expectation that the old path receives both a path challenge and a path abandon at the same time, but we still want those to be separate frames?
C
So you try to open a second path and start the path challenge process, and if you don't want to accept that, you don't have to reply to the path challenge, and then there's no path, right? That's what we already have today. And this is just to avoid that time where you, like, time out until you get a challenge response back or not — you can send it on the other path that exists.
L
But when you first initiate that, that triggers a challenge to go down both paths, right? Because it needs to be validated in both directions. So the new path is going to be getting a path challenge saying, hey, is this actually working in this direction? But the old path is also going to do so when it sees someone coming in from a new place.
L
That's sitting in the middle of a different section of RFC 9000. Essentially, it's for when an off-path attacker forwards some packets: you confirm that on the existing path.
C
Yeah, so the abandon only works if you have path identifiers, yes, and because you don't send it on the same path. And we should look up this part you're just citing, because maybe that doesn't work well.
M
Alessandro Ghedini, Cloudflare. We're starting work on multipath right now, and my current impression on the single versus multiple packet number spaces question is that the single packet number space doesn't really give us much benefit, and implementing both is kind of annoying, so we might end up just implementing the multiple packet number spaces. We might reconsider later on, after we actually deploy the whole thing, but that's my current impression.
M
At least, it's not super clear what the benefits of implementing both are, and a QUIC implementation kind of needs to support the non-zero-length connection ID case anyway, so you have to implement multiple packet number spaces, and the single packet number space is just — I don't know — more complexity for not much benefit.
C
So this is exactly the point. If you don't need zero-length connection IDs, that's probably additional overhead you wouldn't want. But we're looking for people who actually have a use case for zero-length connection IDs — I mean, one of the benefits is that you save a few bytes, so for some use cases that might be really important — and there are two options here. I mean, one option is:
C
We could try to tweak the multiple packet number space solution to support at least zero-length connection IDs in one direction — there have been proposals — but that's also some additional complexity, and it's not clear if it's needed. The other option is to put the zero-length connection ID support in an extension or whatever. But yeah, that's the feedback we're looking for.
M
So I think the zero-length connection ID case is mostly a client use case, maybe. I think some browsers use zero-length connection IDs, so supporting it on only one side might be okay, yeah.
E
Hey, yeah. I don't need to queue — as the chair I could have just interjected, but I thought it'd be polite. We've had a few comments in the chat about mask wearing for anyone in the room. This is a requirement of your attendance: to wear a mask at all times during these meeting sessions. So please take this seriously. This is important for our health; we all want to be able to go home at the end of the week.
H
Thanks. Ian Swett, Google. I was more of a single packet number space fan as an individual, but I have to say that requiring both seems worse than just requiring multiple packet number spaces. In terms of zero-length connection IDs, the use cases, I think, could probably be preserved. So, for example, there are cases where Chrome will open two different ephemeral ports at the same time, and so Chrome would have, like, two different sockets, and I would think it would be perfectly easy to distinguish which path a packet arrived on based on the socket it arrived on — things like that.
G
Me again. So, to be clear, I'm not implementing this, but to me the attraction of the single packet number space is to, like, not write a lot of code and get cheap multipath — cheap in terms of coding effort. But if we're going to require multiple packet number spaces, then you have to pay the entry cost anyway, so the point of single seems much weaker to me.
C
Yeah — and I think this is actually kind of a wrong assumption, because it looks much easier at first, but then actually getting all the packet scheduling, delay calculation and so on right — it might actually be more code in the end, and it's code that is kind of not well specified. So you end up with different implementations doing different things, and then, because the other end is doing something weird, your performance goes down and you don't have any control.
C
So, I mean, it's not only optimizing, right? It's really, as the delay calculation example showed: this just doesn't work correctly; it gives you some crap. And, like, yeah, if you just want to make sure you can send one packet from here to there, it's fine, but if you actually want to do the right thing, you cannot ignore it.
C
So, I mean, the point is: multiple packet number spaces don't support zero-length connection IDs in both directions. There are proposals, so if we need this use case, then we need something, right? There is a way to maybe support zero-length connection IDs in one direction, but it also gives you more complexity and more ambiguity, so it's also additional code. So it's really the question: what do we do with zero-length connection IDs? Is it a use case we want to support or not?
N
From Apple here: like many other early implementers, we are leaning towards multiple packet number spaces, and we are having trouble finding current evidence that zero-length connection IDs, and hence single spaces, are needed today. We may consider emulating zero-length CIDs by injecting a known constant, essentially making it a non-zero length, and hence injecting that problem space into the other.
I
Hi, Luke from Twitch. I'm noticing in a few working groups that there's this tendency to put a context or, like, a packet space ID in to distinguish between sessions. MASQUE has it; in WebTransport we're talking about a session ID; and here we're making it part of the path. I'm wondering if overloading the connection ID is the right approach. Is this something we want to do, like, making load balancing more difficult?
C
I'm open to having this discussion, but, like, one of the decisions we made early on is that we want to keep the changes to the base protocol minimal, right, and that would be a big change.
J
The single-space solution was about five percent less efficient than the multiple-space solution, and I took that as a challenge: can I make them equally efficient? And the answer is yes, I could — and I did, in my prototypes. But to do that, I had to add a lot of code in the ACK scheduling path.
J
I had to add a lot of structure in the store of packets to make sure that packets carried not just the packet number but were also linked to the path on which they were sent, and that we can retrieve how many packets were sent before them, after them, etc., etc. Overall, that complexity, as Mirja said, is much more than that of the multiple-space solution.
J
So that's the reason why I supported, and in fact proposed, this unified proposal. I mean, the reason for the additional complexity is the need to support the zero-length CID case, and we need feedback from implementers there: whether that zero-length CID case is actually needed with multipath or not. That's what we need.
C
Yes, I think, you know, that's the main question, and if people have a use case for zero-length connection IDs and multipath, it would be good to learn about that — maybe put it that way.
H
Ian Swett, Google. I was just going to add that, I mean, a byte is fine — at least for Chrome and other browsers and such. But also, I'm still, I think, not fully understanding the problem. It seems like, if I can distinguish at the receiver which path I'm receiving something on, then I can decide to use a zero-length connection ID, and if I can't, then I need to use a non-zero-length connection ID — and this seems true in the existing server deployments we have today, right?
H
I guess I'm not understanding, like, why multipath makes this actually different. Like, if I can tell from the ephemeral port that this is path A or path B, then I don't need a connection ID; but if I'm a server receiving on 443 and I have 10,000 connections, then clearly I need a connection ID to, like, multiplex them.
C
So it does work, but it just makes your scheduling and your acknowledgement code more complex to get right. You have to decide which part of the packet number space to use on which path, and that impacts a lot how your ACKing will look; it impacts your ACK size. And you also don't really know what the other end is doing — like how often they send ACKs or not — and when you get the packet back, there's, like, ambiguity about ECN markings.
C
Usually you don't know which path the ECN markings belong to, so if you get an ECN marking, you have to reduce your congestion window on both paths, because that's the most conservative thing to do, or you have to be somehow smart about ACKing — but that's the other end, which you don't control. And the same is true for ACK delay calculation: usually you will only be able to calculate the delay of the longest path.
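The conservative ECN response described here, as a tiny sketch. The function and parameter names are mine, invented for illustration:

```python
def on_ecn_ce(cwnds: dict, path_hint=None) -> dict:
    """React to an ECN-CE report: halve the congestion window of the
    implicated path, or of every path when the ACK gives no way to
    attribute the CE mark to one path (the single-space ambiguity)."""
    targets = {path_hint} if path_hint is not None else set(cwnds)
    return {p: (cw // 2 if p in targets else cw) for p, cw in cwnds.items()}
```

With per-path packet number spaces, the ACK that reports the CE count identifies the path, so only that path's window needs to be reduced.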
H
Yeah, I would completely agree. I was merely asking — I think I didn't fully understand why we can't have zero-length connection IDs with multiple packet number spaces. I think that's the question I still don't fully understand, but I can take it offline.
G
I don't know if I can. I think the issue is NAT rebinding: if you're NAT-rebound on the path, there's a lot of ambiguity.
K
Thank you. So it sounds to me like there's a fair amount of agreement on doing multiple packet number spaces. It also sounds to me like there's a possibility of considering what else—
K
This is an issue that's been, like, you know, doing the rounds for quite a while, and we've got some experience here.
C
Yeah, so the audio wasn't super great, but I think you said we should decide now whether we want to support zero-length connection IDs.
K
Yeah, I don't know — can you hear me now? Is it better?
C
So, like, yeah, I mean, as I said — and probably Jana also said, I believe — the question is really not about doing one or the other, single or multiple packet number spaces. I think what we decided already is that multiple packet number spaces are the way to go, and single might be optional or not. So the question really is: do we want support for zero-length connection IDs? That's the question to ask. I'm happy to make a decision now.
C
C
E
C
E
Just from me, with my chair hat on, but this is my opinion: this has been some good chat and some good engagement, thank you, folks. We've taken the comments down about the potential consensus call, so look out for that; we'll discuss the follow-up on the list if there's anything to be done. One question I've got: you talked a couple of times, Mirja, about implementation experience, and it was mentioned that the quiche project is looking to do some work.
E
I wonder if maybe at the next hackathon in November we should try and form another QUIC table, and actually try and bring some folks to do some implementation and interop, create a target, and maybe come up with some test cases. We don't need that answer right now, but if people are interested in that, let's take that discussion offline.
O
Yeah, Lars Eggert. So I'm not super closely following this discussion in QUIC, but back when we did Multipath TCP, the research, not the standard, we actually tried to do the single space, which, we quickly found out.
O
We actually have the option of doing the single space, but maybe we now have enough experience with Multipath TCP and multiple spaces to see benefits that we didn't see way back when. But, as I said, not having followed this in detail, I kind of sort of wondered why the one space that back then we thought would be the way to go wouldn't be the way to go here. But since I'm not implementing it, I'm not going to deploy it.
B
C
Yeah, because in TCP you have the on-path sequence number and then you have a separate thing, right? And this is not the kind of packet number space we're talking about here. It's really the on-path packet number space, and it's really: do you use the same numbers on both paths, or do you use a different set of numbers on the paths? But it's the same element, right; it's not like in TCP, where you actually have this additional data-level sequence number.
O
K
Hi, hopefully you can hear me reasonably. I'll shut up if you can't hear me.
K
Okay, Lars's point: yes, exactly. I think that's the difference here, primarily. And I'll also speak about not just MPTCP; even almost 20 years ago I did this for SCTP.
K
I worked on an SCTP multipath implementation, and the reasons that we did that single space are exactly the reasons that don't apply to QUIC. Specifically, both SCTP and TCP had assumptions about linear sequencing: the protocols, the senders, the receivers, and, in the TCP case, middleboxes all have assumptions about sequencing of packet numbers. We don't have that in QUIC, and so that's actually a freeing thing compared to implementing this with a single packet number space, as in SCTP.
K
One of the things I quickly found out was that eventually, near the end, meaning after a while of working on and implementing this, my conclusion was that separate sequence number spaces would actually have made things far simpler, because of a lot of the things that a transport does for its recovery and congestion control.
K
We typically tend to build a certain amount of reordering tolerance within a path, but multipath really blows that out of the water, and you don't want to apply the same ideas of reordering tolerance to multipath, because in multipath we have a lot more information: we actually know that there are two or more different paths, within each of which we can expect a certain amount of sequencing. So all of that is to say that multiple sequence numbers actually make sense.
K
In the SCTP case it was backward compatibility; it was the fact that ACK compression wasn't very good when you split the sequence number space, and so on. QUIC explicitly and deliberately allowed for those things to exist; by design and by philosophy we've allowed for multiple sequence number spaces to exist. So I strongly urge us to go towards that, and only if things are falling apart badly should we consider a single sequence number space.
C
Yeah, thanks, that's useful input. And actually, Lars, we already have two sequence number spaces in QUIC: one is the packet number and one is a frame ID, right? And that makes it nice to actually have this split, and not to tie everything to the packet number space. Jonathan just disappeared.
E
Yeah, we're kind of at time. I locked the queue just so we can make progress and not spend too much time on this question. If anyone really wants to say something, do so now. Ian? No.
E
Yeah, I think this has been very helpful, and thank you all for your inputs.
P
Q
H
Thank you. I'm Ian Swett; I'll talk about ACK frequency and give an update.
H
Great. So there were some updates recently, mostly editorial, as well as some additional normative text in areas where there really was intended to be normative text but we never actually bothered to write anything down, such as with Ignore CE.
H
So this is an overview of the two frames that we have: the ACK_FREQUENCY frame, which allows you to tell the peer how often they should send acknowledgements to you, how long to delay them, and so on and so forth; and then an IMMEDIATE_ACK frame that allows you to elicit an immediate acknowledgement.
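To make the later one-byte-versus-two-byte discussion concrete, here is a sketch of what such a frame costs on the wire using RFC 9000 variable-length integers. The field list and the 0xAF type value are illustrative assumptions for this example, not necessarily the exact draft layout.

```python
# QUIC variable-length integer encoding per RFC 9000, Section 16, plus a
# simplified ACK_FREQUENCY-style frame built from varints.
# The frame's field list here is an illustrative assumption.

def encode_varint(v: int) -> bytes:
    for bits, prefix in ((6, 0x00), (14, 0x40), (30, 0x80), (62, 0xC0)):
        if v < (1 << bits):
            length = 1 << (prefix >> 6)  # 1, 2, 4 or 8 bytes
            out = v.to_bytes(length, "big")
            return bytes([out[0] | prefix]) + out[1:]
    raise ValueError("value too large for varint")

def encode_ack_frequency(frame_type, seq, ack_eliciting_threshold,
                         request_max_ack_delay_us):
    return (encode_varint(frame_type) + encode_varint(seq) +
            encode_varint(ack_eliciting_threshold) +
            encode_varint(request_max_ack_delay_us))
```

Note that any type value of 64 or above already takes two bytes as a varint, which is the size question being debated here.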
H
H
So the first one's an easy question: should this be a one-byte frame type? We probably want to stick it in PTO packets in a lot of cases, so I guess it's slightly more likely to fit if it's one byte, and we might want to send it reasonably often. This is an issue filed by Martin Seemann.
H
C
I just left the meeting, so I can't press a button. I think it depends on how often you expect to send this, right, and that's not super clear to me. But if you send it often, one byte would be worth it.
R
Who clicked the button faster, come on? Martin, you got this. Like, we've only used around 60 of the short one-byte code points, and extensions on average might need one, maybe sometimes two. We're going at a rate of one, maybe two extensions per year, so we're not gonna run out anytime soon. I would say, for working group extensions where it looks like the majority of folks are going to implement them: yeah, give it a one-byte code point. Done.
P
S
So I don't think an extra byte is a problem here, and I'd rather see it left as it is. I realize David's argument is a fine one, and we do have space, but we also don't have to use it, so leaving it as it is is fine with me. By the way, the stats are that we don't get PTOs on 80-plus percent of our connections. It's crazy.
H
Q
H
M
K
I'm trying, I'm trying, trust me.
K
I wonder if this is again one of those situations where we're trying to optimize well before we really have a problem. We can go with one byte, we can go with two bytes; it doesn't matter much. I would suggest that we leave it as it is until we have more experience from various folks, and we can always come back, revise the draft, come up with a new version, and stuff of that sort.
K
At this point, getting this out and into people's hands is more useful, in my opinion. So leaving it as it is is what I would suggest.
K
I agree with Martin's comment that this is not something we expect to send very frequently. If you're asking for immediate acking to happen all the time, it seems like there might be a problem somewhere else. But yeah.
G
Martin Duke, Google. I would suggest that we are approaching bikeshed territory on this, and probably nobody is gonna lie down on the road one way or the other. So, having gotten the pros and cons, I suggest the editors just make a decision, and I'm sure everyone can live with it, and if that's not true, then someone will correct me.
R
Right. It's been pointed out that I didn't properly state my affiliation last time, so sorry: David Schinazi, QUIC enthusiast. I stood up to say something similar to what Martin said. What a beautiful bikeshed you have; I really wonder what color it should be. So yeah, I'm not going to lie on the tracks here, I don't care that much: have the editors pick, move on. That sounds great.
H
All right, how about we have Martin talk, and then I might actually just shelve this for a few months and get some more utilization and different use cases of IMMEDIATE_ACK as well.
P
Yeah, so sorry for opening up this bikeshed. The reason I opened this issue is that the draft says there are some cases where you want to send this frame at least once per RTT. I'm not fully sure I understand why you would want to do this; probably we can add some more text about that. If you actually only send this on PTO, then I don't care, but if you send it once per RTT, this might be worth doing.
H
Yeah, that could probably use more text. I think the idea was: if you wanted more fine-grained control over when you received acknowledgments, particularly if you were using particularly long ACK delays, in terms of packets and time. But yeah.
H
So there's a lot of discussion in various PRs about whether Ignore CE is useful, or, one could say, harmful, but Mirja was very kind in her issue title. It's also been restricted further, so it now says it should not be set if the sender sets ECT(1) in its outgoing packets, such as for L4S.
H
I think at this point no one is using this, and so I can see an argument that, given it's not seeing active use, we should just jettison it, because we don't need it right now. And I'm not sure I, as an editor, want to spend a bunch of time arguing for ECN features that I'm not going to immediately use. But before we remove it, you know, we didn't have a consensus call to add it.
S
So, Martin Thomson. I'm not going to be using this in the foreseeable future, but I can understand why other people might say that they would like to do something like this. The question that I have for those people is: if you're doing something like L4S, or planning to, would a transport parameter...
C
I might have a similar point. I personally think the answer is no, but that's not my main concern. My main concern is that I think this actually shouldn't be part of this specification, because it really depends on the congestion control and the ECN mechanism to specify what the best thing to do is, and so you shouldn't have...
T
H
T
C
It's just the spec part, yeah. It's also that, if you want to do it right, it would be more complicated than that, because you should kind of never ignore the first CE, but you might not react to follow-up CEs immediately, or whatever. So this is also too simplistic to make it useful.
H
It sounds like I'm inclined to remove it, given where we're at. We can always add it back later if people decide that there's a compelling use case and actually want to implement it, but I do not want to ship a draft where we have a feature that no one has implemented. Oh, Jana.
K
When we put this in, just to be clear, we weren't fully certain how much traction it would get, but we offered it expecting, thinking, that there was an ask for it. Sure, we have Ignore Order, so we wondered if we should have this as well. And so yeah, if there's no strong push for it, we shouldn't add features that don't have people asking for them. Cool.
H
H
Okay, the next one is kind of a can of worms, unfortunately. Not from a design perspective; it's just a challenging problem, which is: exactly how much text do we want to put in there about examples or suggestions of how to use this extension?
H
There's one open issue, but there have also been other side comments, particularly in the context of Reno or Cubic, about what the right number is. Is it eight ACKs per RTT, or what magic number do I need to stick in there to get decent performance out of Reno and Cubic? And I honestly don't have a really good answer off the top of my head.
H
P
P
Martin Seemann. Given that RFC 9002 specifies a congestion control algorithm that should be safe and performant for use on the public internet, it would be really nice if this extension also suggested something that will be safe and performant on the internet. Not saying that you can't do something better, but it would be really helpful to have something in that text for people who don't have an internet-wide deployment to run their own measurements.
H
As a follow-up question, because I don't have this off the top of my head: does anyone have a set of constants that they've used with Reno or Cubic with the ACK frequency frame that they find work well?
H
Or is this something where I need to go run some experiments and try to get some data? Because we currently run BBR by default at Google and currently don't have any Cubic or Reno experiments, but we still have the code, so we could go back and run those if that was the way forward on this.
U
B
As someone that has been thinking about experimenting with this with an internet-wide deployment: I don't think that we should put anything in, or rather have the onus be that this document prescribes something, because if we have that requirement, this is potentially going to be blocked on publication for essentially an indefinite period of time, because the answer is we really don't know what to do here.
B
I don't think anyone has experience using this successfully yet with a variety of congestion controllers, certainly not the one that we specify in the base draft, which, you know, is not really being widely used. And so my thought, my personal opinion, is that this document can serve as a mechanism document, and if people have strategies that they want to use, that they find success with, those can be follow-up documents that specify: hey, this is how you can use ACK frequency for this.
B
This is how you can use ACK frequency to achieve this. Because even with a single congestion controller you're not necessarily going to have the same strategy. So, for example, if you want to use Cubic with ACK frequency in a data center, versus Cubic with ACK frequency on a very low latency network on the internet, or a high latency network on the internet, I think the strategies are going to vary, and we don't know at this point what those strategies are going to be.
H
T
That 10 is not a bad number to kick out for a starting point. It's probably a bad number in the end; it's probably a bad number in a data center; it's probably a reasonable number for an internet path. Can we just see what we can come up with here?
T
We can make a recommendation before we publish of a starting value, from which we can work, that we consider safe. Because surely two is very conservative, and surely an RTT's worth or two RTTs' worth is highly controversial, so we must be able to say something. Well, I suggest we work on it.
H
Thanks, Gorry. Yeah, I mean, there are some boundaries, I think, about trying to get an ACK every RTT typically, and such, just for guardrails, but that's about all there is right now. Okay, why don't I just keep this open and we can work on it over time? It doesn't seem like something we need to fix today. Thanks.
H
Okay, so this is one that we talked about quite a while ago, and a number of other people have looked at the PR again. How much time do I have left?
P
H
Five minutes? Okay, thank you. All right, so we can at least go over this again and swap it back in, and if we don't get to the end, that's completely fine. So this is the idea: okay, so one ACK is sent immediately upon a missing packet.
H
H
Then you'll get one immediate ACK, and then you'll have to wait for the other 10 packets to arrive, and then you'll send another immediate ACK, and then you'll finally detect that that missing packet was lost, because it's threshold-based loss, so you start out with a loss threshold of three.
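The delay described above can be modeled with a small sketch. This is a toy model under stated assumptions: packet-threshold loss with threshold 3 (RFC 9002's kPacketThreshold), and a receiver that acks once when it first notices the gap and otherwise only at every Nth packet; the function names are illustrative.

```python
# Toy model: how long packet-threshold loss detection is delayed when the
# receiver only acks every `ack_every` packets. Assumptions as in the
# lead-in; not draft text.

def packets_until_loss_declared(lost_pn, ack_every, threshold=3):
    """Return the packet number whose acknowledgement first lets the
    sender declare `lost_pn` lost via the packet threshold."""
    # The sender needs an ack covering a packet >= lost_pn + threshold.
    first_candidate = lost_pn + threshold
    # One immediate ack when the gap is first noticed...
    pn = lost_pn + 1
    # ...then only scheduled acks at multiples of `ack_every`.
    while pn < first_candidate:
        pn = ((pn // ack_every) + 1) * ack_every
    return pn
```

With an ack-eliciting threshold of 10, losing packet 19 means loss is only declared when packet 30 is acked, versus packet 22 with per-packet acks, which is the extra delay being described.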
H
So this can actually substantially delay loss detection in certain situations versus the base draft, once you start bumping up the ack-eliciting threshold. So, anyway, the potential solution to this is to communicate the reordering threshold to the receiver, instead of just saying Ignore Order, and then the receiver sends an immediate ACK whenever there's a missing packet in this range. That would basically allow the sender to immediately declare loss.
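A receiver-side sketch of that proposal, under illustrative assumptions (the function name, arguments, and the "0 means classic behavior" convention are not from the draft):

```python
# Sketch: the sender advertises its reordering threshold, and the
# receiver acks immediately whenever some missing packet has fallen
# `threshold` or more below the largest received packet, i.e. the sender
# could now declare it lost. Names are illustrative, not draft text.

def should_ack_immediately(largest_received, newly_received,
                           missing_pns, reordering_threshold):
    if reordering_threshold == 0:
        # Classic behaviour: any out-of-order arrival forces an ack.
        return newly_received < largest_received or bool(missing_pns)
    largest = max(largest_received, newly_received)
    return any(largest - pn >= reordering_threshold
               for pn in missing_pns)
```

For example, with threshold 3 and packet 4 missing, receiving packet 6 stays quiet, but receiving packet 7 triggers an immediate ACK so the sender can declare packet 4 lost right away.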
H
So it's basically an optimization, as well as kind of a safety mechanism, to say: I'm going to declare loss at this many packets, so whenever you see a missing packet that is in that range, send me an ACK immediately, so I can declare loss as quickly as possible. That's basically what this mechanism is saying.
H
It also slightly reduces the number of ACKs when packets are received out of order, because if you get a twiddle, you know, one swapped with the other, then instead of sending an immediate ACK and then sending another immediate ACK when the gap is filled, you don't send the first ACK; the gap gets filled and everything's fine. So it turns out, for reordering on the internet:
H
Twiddles, I think, are something like 60% of all reordering; it's some huge number, at least last time I looked. So there might be some small performance improvement in small-scale reordering networks. But anyway, that's an overview of the issue. Especially now that we're going to drop Ignore CE, we have the bits to make this a one-byte number, which seems like a sufficient amount of granularity, or we can just make it a varint. It doesn't really matter.
E
So if people have comments, please jump in the queue now; otherwise, you know, let's move on. But thank you, Ian.
V
H
Thank you very much for your time, and I think I'll give back the rest of it.
G
G
No? Okay, very good. So the authors of QUIC-LB are pleased to report that all GitHub issues are currently closed. If you're interested, I invite you to look at the change log or the recently closed PRs. The only one that's probably worth at least bringing up is related to the crypto review that we received on the four-pass algorithm.
G
So this is the short-CID encrypted version. There was a simple fix that they suggested, which we made. There was another suggestion that the best practice would be to make it a twelve-pass algorithm; we declined to do that, for reasons that are probably obvious, and Christian submitted a PR, thank you, Christian, that explained the reasoning in the security considerations.
G
I invite you to look at that, email, or file an issue if you have a concern, and if you have any questions about it now, I'm happy to answer them. But what I really wanted to talk about is the future path of this document. As I said, we've kind of reached the end of our editorial process, as far as we can take it. So one option is just to go to working group last call, which will of course generate more issues, but basically expedite moving the document forward.
G
G
There are a couple of implementations out there, but because they're both load balancers, and no servers have implemented it at this time, we cannot interop, because we need both sides of that interaction. My current focus in my day job is to implement QUIC-LB for Google quiche on the server side, so we will have a server implementation soon, and also, probably, certainly within the next 12 months, we'll deploy this at Google. So option one is working group last call immediately.
G
Option two is to wait for that implementation and deployment process to continue, and then do working group last call. And option three, I guess, would be to wait for more people who are not Martin Duke to implement this thing, but who knows how long that will be. So that is all I have to present, and I would love to hear comments from the community on what they would like to see before we do working group last call. Jonathan.
D
While my audio comes up: yeah, Jana wanted me to relay, because he has audio trouble, a point about the difference between Ignore Order and the reordering threshold.
D
G
C
G
Okay, we're on someone else's document, but I don't see anyone commenting and nobody in the queue, so all right. Returning to the question: does anyone have an opinion about the maturity of the QUIC-LB document and its readiness for working group last call?
C
Yeah, cool. Again, I don't have a concern about going to working group last call right now, but if there's no good reason that we need to publish it immediately, why don't we wait for your implementation?
G
Okay, I'm seeing a sprinkling of thumbs up with that sentiment, which is fine with me. So you will probably not hear from me for a while about this document, and I will do another report once things are a little more mature at Google in terms of doing this work. And Retry Offload, just while I'm up here, is in a similar stasis state, but there's no active implementation going on, so that is sort of in the deep freeze until somebody is shown to care.
H
L
Eric Kinnear, Apple. I don't think we have plans to immediately deploy anything, but we've certainly been looking and trying to provide feedback as we go, and we need something with a very similar set of attributes. So I would not give up hope.
G
I would just like, while I'm up here, to make an appeal to server implementers: one of the intents here, the original target use case for this, was the cloud: that cloud L4 load balancers would implement this, and then people could deploy QUIC servers and they could all interoperate.
G
I think certainly our long-term plan is to do that in Google Cloud. So, you know, for those of you trying to position your servers, this might be an interesting market opportunity. Anyway, if there are no other questions, I'm going to return the balance of my time to the group. Thank you.
E
Cool. Next up we have Christian, to talk about QUIC timestamps, as we enter the phase where we stop talking about adopted work items and talk about related or potential new work.
J
P
C
J
J
J
The first design of the timestamp was to solve the asymmetric case, in which there is high bandwidth available on the data path but limited bandwidth, or congestion, on the return path. The failure mode there is that you can often observe what looks like congestion on the whole path when it's only caused by the return path, and the response should be very different.
J
So with a timestamp, what happens is that the acknowledgements are sent in a packet that is timestamped, and the congestion controller can use that timestamp to understand in which direction the congestion happened and trigger a different reaction. And that suppresses things like wrong RTT measurements, wrong congestion control decisions, or even spurious retransmissions.
J
We improve responsiveness, we get better congestion control, we get better loss recovery, but we need a smart way to measure the actual RTT of the path that combines the delays one way and the other. And a comment I received on that implementation is that it requires a PhD to understand how you do RTT measurement.
J
Well, I don't think it does, but then I do have another PhD, so who knows. The timestamp option fixes that very cleanly, because if you have a timestamp in your packets, you can measure each one-way delay, each way, and then the implementation becomes completely straightforward; it's much easier. Now, there are other use cases for timestamps: they are very useful if you do delay-based congestion control, things like LEDBAT, for example, but also HyStart.
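The measurement being described can be sketched as follows. This is an illustrative sketch, not the draft's algorithm; variable names are assumptions, and, as the later discussion on the list points out, each one-way component includes the unknown clock offset between the hosts, so only deltas between samples are trustworthy.

```python
# Sketch: splitting the RTT into its two one-way components using a peer
# timestamp carried in the acknowledging packet. The clock offset between
# the hosts is unknown, so absolute one-way values are not meaningful on
# their own; compare deltas between samples instead.

def one_way_delays(send_time, peer_timestamp, recv_time):
    """Apparent one-way delays for one packet/ack exchange.

    send_time:      local clock when the packet was sent
    peer_timestamp: peer clock when the ack was generated
    recv_time:      local clock when the ack arrived
    """
    forward = peer_timestamp - send_time    # true delay + clock offset
    backward = recv_time - peer_timestamp   # true delay - clock offset
    return forward, backward

def rtt(send_time, recv_time):
    # The offset cancels in the sum, which is why RTT needs no timestamp.
    return recv_time - send_time
```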
J
It's a very simple draft: it just defines a TIMESTAMP frame, with a type and the timestamp as a number of microseconds.
J
J
But at this point I am really wondering whether we should pursue this or not.
J
W
W
W
Firstly, is it mandatory for the sender and the receiver to do clock synchronization, if they each have to generate their own timestamp? Or is the timestamp only generated by the server itself?
W
If the timestamp is only generated by the server itself, then the timestamp cannot be used by the client to tell anything about timeliness, but only to measure the one-way delay from the sender side.
W
J
Thank you for the suggestion. I think, at a high level, I agree with you that we should have a discussion inside the working group about what is needed, and get feedback, and I think that's one of the reasons to seek adoption: so we can get multiple people working together.
J
We actually had feedback several years ago that the frame should be as simple as possible, and that the document should only specify the frame, not the mechanism, because the mechanisms will be the combination of, say, that frame and other frames, et cetera. But at a high level, yes, we should absolutely have a discussion.
J
H
Yes, Ian Swett, Google. I was a co-author on one of those documents that you mentioned, about multiple timestamps, or timestamps per packet, and that is not being actively pursued by us right now, except we do use it.
H
For one internal project, which has a very non-standard congestion controller, that I think actually got published at some point; I could dig it up. But it's in this weird state where we basically just published it so it was out there; it's not something we're actively pursuing. I will say, just from an implementation perspective, it's a lot more convenient to be able to carry the timestamp in when you receive an acknowledgement.
H
Now, there are a number of ways of making that happen in the code: you could store the timestamp and always have it received right before the ACK, or something like that. But just from a code structure perspective, having the timestamp available, even if it's only one timestamp, at the same time that you're processing the ACK, is super convenient. But I guess I don't think you should give up on this work; I'm just not...
J
G
Martin Duke, Google. So I believe I understand the case for multipath, but I'm not sure I understand the justification for the single-path benefit. I mean, if you detect that it is reverse-path ACK congestion, would you send more aggressively and just make the ACK congestion worse, or what is the response if you obtain this information in the single-path case?
J
Well, one way would be to tune the ACK frequency, for example. Okay, but also, the other way would be to understand that you can avoid...
J
E
I think Christian is maybe having some audio issues again, so I'll dive in with a simple question. There's some talk about time synchronization stuff that sounds super complicated; I think starting with something simple...
E
If we were to do this work at all, starting with something simple wouldn't preclude more complicated stuff coming in the future; we have space for extensions and frames, so if there's some usage of this thing, go that way. Sure, some specifics could change, but I'm seeing a few use cases, and this is without a chair hat on. With the chair hat on, I'd really like to understand if people would really object to this kind of work.
F
Jake Holland. I think there are some good use cases for this. You can apply this to do things like chirping, and apply some of the path-dispersion bandwidth detection techniques. And, you know, do we need it? I could use it, I think. So I'm broadly supportive, but I would, I guess, like to see some development of the use cases, maybe, to get a better handle on the other ways people anticipate using it. But yeah.
Q
Jonathan Lennox. I mentioned this on the Zulip, but I'll repeat it here: I think this would be useful for a lot of the real-time media cases. All, or most, of the RMCAT-style feedback mechanisms, most notably Google CC, which is the congestion controller in WebRTC, use timestamps for one-way delay measurement to get low-delay real-time media congestion control.
U
Yoshi Nishida from Amazon Web Services. I'm not very closely following the QUIC discussion, but I'm wondering about measuring one-way delay for HyStart, because HyStart is sender-side logic, so you need to receive the ACK anyhow. So unless there is congestion on the ACK path, I don't know if measuring the one-way delay is that useful. Maybe we might want to see more use cases.
V
Spencer Dawkins. Christian, am I understanding that your ask is about adoption of this draft in the working group? Is that right?
V
E
Speaking as a chair, with my hat on: I think having this discussion at this session has been good. Some of it is knowing whether the author wants to seek adoption at all, if they feel ready. I'm seeing generally positive comments, like that this could be useful, especially because of the format, and that it could apply to different things, but then a few people saying they don't quite understand the use cases.
E
I think what probably needs to happen is a little bit more follow-up discussion on the mailing list, and I probably need to confer with Matt as well, but I'd kind of be leaning towards issuing an adoption call, just to flush out the people who would strongly object to adopting this work right now, and the reasons why. And if those reasons are things that Christian wouldn't want to address, then he can go away, and maybe in the future somebody else can come and pick up this style of work.
V
Yeah, yes, it does. So the other thing I'm thinking, you know, I think Jonathan Lennox and I were headed to roughly the same place, which is, as we are...
V
You know, AVTCORE has just adopted an RTP-over-QUIC draft, and we had a Media Over QUIC BoF this week that was not as much of a train wreck as it could have been.
V
You know, especially if that second chunk of work gets chartered between now and November, I would like to see an adoption call go out sometime, roughly when it seems right to the chairs. But I know which way I would vote on an adoption call. Do the right thing, of course.
E
E
J
Jana is mentioning that this should be about the variation of the one-way delay, which is correct. I mean, all measurements of delay over the internet are uncertain, in the sense that you don't know exactly what the reference is, so you never measure absolute delay; all your systems of equations end up underdetermined, and you have to make an arbitrary choice of whether the delay is one way or the other way.
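That observation, that only the variation is well defined, can be shown in a short sketch. The minimum-filter baseline used here is an assumed approach for illustration (it is how delay-based schemes like LEDBAT typically remove the constant offset), not something specified in the draft.

```python
# Sketch of the "one-way delay delta" idea: each apparent one-way delay
# sample is (true delay + unknown constant clock offset), which is
# underdetermined on its own. Subtracting the minimum observed sample
# removes the constant offset, leaving a queueing-delay estimate.

def owd_deltas(samples):
    """samples: apparent one-way delays including a constant unknown
    clock offset. Returns each sample relative to the smallest one."""
    base = min(samples)
    return [s - base for s in samples]
```

Note the result is invariant to the clock offset: shifting every sample by the same constant leaves the deltas unchanged, which is exactly why the variation is measurable while the absolute delay is not.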
J
What you do measure, then, is the variation of the delay, and that is correct. And we could go explaining that; it's kind of obvious, too, but that's the kind of stuff that I would like to see after adoption, because clearly that could be done by a PR in the draft and a discussion.
J
K
Wow, okay, awesome. So yeah, I agree with you. Let's clarify this, because I think this is often a point of confusion for folks; just calling it a one-way delay delta makes it clearer. However, I think my higher-order point here is that the draft is useful because it allows people to actually have a mechanism to experiment with, but without actually having a lot of people asking for usage of this thing, it'll be difficult to design a mechanism
K
that's going to be worth standardizing. That's my opinion. I think that we need people with use cases before we can actually adopt and standardize something.
I
E
We're at time for this item, so I'd say we'll go away and do some discussion with various people. This has been very useful, so thank you all for the comments. Thank you, Christian.
E
Next up is Jake Holland.
F
I don't have any means to run the slides; am I supposed to do it on my phone? Could you bring them up, if that's possible?
F
Oh great, yeah. Hi, I'm Jake Holland, multicast enthusiast, and I'm here to talk about our multicast QUIC proposal. I'll mainly be going over the actual proposal; if you want to see the motivation, there was a BoF at the all-remote IETF 111, and there are links to that in the slides. We also went over a bit of it in SECDISPATCH at IETF 112. This proposal is largely a follow-up to that SECDISPATCH discussion, where some of the feedback we got was that we needed a specific protocol proposal for the security characteristics we were looking to get with our multicast security considerations draft, which tried to go over the security considerations for using multicast to deliver web traffic specifically, but also for generalized non-web traffic delivered over multicast.
F
I won't go in depth, but I'll try to give a really basic overview. Next slide, please.
F
This is anchored to a unicast connection: there's always a unicast connection between a server and a client, and the server can tell the client, "by the way, some of the data I'm trying to send you should be received by joining this multicast session; you'll see some 1-RTT QUIC packets there, and here are the keys to decode them." Obviously, since it's multicast, the packets are identical for every receiver; these are shared symmetric keys.
F
That's one of the things we get into in the security considerations doc, which is essentially the security considerations section for this document. Because they are widely distributed shared keys, we also have separate integrity packets that are anchored on the unicast connection. That means that even though the keys are shared, the packets cannot be spoofed: they must be proven to have been sent by the actual server that you're talking to. The server controls the entire multicast listening process.
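The integrity idea described here can be sketched as follows. This is illustrative only: the function names and layout are not the draft's wire format. The server announces, over the authenticated unicast connection, the hashes of the multicast packets it will send; a client accepts a multicast packet only if its hash was announced, so holders of the shared multicast key still cannot spoof packets.

```python
# Sketch: per-packet hashes delivered over the authenticated unicast
# path provide source authentication that the shared symmetric
# multicast key alone cannot. Names here are hypothetical.
import hashlib

def announce(packets):
    """Server side: the set of packet hashes sent over the unicast path."""
    return {hashlib.sha256(p).hexdigest() for p in packets}

def accept(packet, announced_hashes):
    """Client side: accept only multicast packets whose hash was announced."""
    return hashlib.sha256(packet).hexdigest() in announced_hashes

server_packets = [b"segment-1", b"segment-2"]
hashes = announce(server_packets)
print(accept(b"segment-1", hashes))   # True: announced by the real server
print(accept(b"forged-pkt", hashes))  # False: shared key alone can't spoof
```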
F
It tells the client what to do, and the client sends ACKs for all the packets that it receives. This is very similar in a lot of ways to the multipath work: the packets are just interpreted as part of the unicast connection that you've got. It's anchored on a channel ID rather than a connection ID, but that's essentially just a layer of indirection to the connection ID, and the reason for that is, again, that these are shared, identical packets that are sent to many receivers.
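The channel-ID indirection can be sketched like this (class and field names are hypothetical, not the draft's): identical multicast packets carry a channel ID shared by all receivers, and each client maps that channel ID onto its own unicast connection, so packet processing proceeds in that connection's context.

```python
# Sketch: a channel ID is a layer of indirection onto each client's own
# connection, letting one identical packet serve many receivers.

class Client:
    def __init__(self, connection_id):
        self.connection_id = connection_id
        self.joined = {}    # channel_id -> keys announced by the server
        self.received = []  # payloads, processed in this connection's context

    def join(self, channel_id, keys):
        self.joined[channel_id] = keys

    def on_multicast_packet(self, channel_id, payload):
        if channel_id not in self.joined:
            return False  # not joined: packet is ignored
        # The channel ID resolves to this client's own connection, so the
        # payload is handled as if it arrived on the unicast connection.
        self.received.append((self.connection_id, payload))
        return True

a = Client(connection_id="c1")
b = Client(connection_id="c2")
for c in (a, b):
    c.join("chan-7", keys="shared-key")
# One identical packet, interpreted in each client's own connection:
for c in (a, b):
    c.on_multicast_packet("chan-7", b"data")
print(a.received)  # [('c1', b'data')]
print(b.received)  # [('c2', b'data')]
```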
F
The multicast channels are only for server-to-client data, only for server-initiated streams. They can be used for datagrams as well; not so much for web traffic, because HTTP/3 datagrams have a client-chosen ID inside them, but for non-web datagrams we think it's still possible, as it is for server-initiated streams. The packets are just interpreted in the context of that connection.
F
The
the
of
the
unicast
connection
that
you
already
have
yeah
any
any
questions
about
any
of
this
I'd
be
happy
to
go
over.
But,
let's
just
let's
just
move
on
for
time
here
again
for
discussion.
Yeah
next
slide
for
discussion,
so
we're
working
on
an
implementation.
This
is
in
conjunction
with
several
members
of
the
w3
multicast
community
w3c
multicast
community
group.
F
We've been basing it so far on the Google QUICHE implementation, working toward a demo that could run in a browser; we just want to prove to ourselves that it will work, essentially, but we think that it will. In terms of implementation status, the maturity of the spec, and what we think we've solved: we don't know of any reasons that this does not match the security considerations document.
F
We've recently noticed that, for web traffic, we probably need to include something that more strictly enforces origin policy, or allows the browser to enforce origin policy on the multicast packets that are received. We do think this is possible; we have a couple of ways to do it. That's not yet part of the draft, but it works because the packets are integrity-protected and can't be spoofed.
F
We think that, for example, we could include an origin frame in the multicast packets, and this would give the browser the information it needs to avoid including anything from a wrong origin in the data that it's processing.
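One way the origin-frame idea could work, as a sketch: because the multicast packets are integrity-protected, the browser can trust an origin label carried with them and discard data claiming a wrong origin. The structure below is hypothetical; no such frame is in the draft yet.

```python
# Hypothetical sketch: filter integrity-verified multicast payloads by
# a carried origin, keeping only data for the expected origin. This is
# illustrative, not the draft's wire format.

def filter_by_origin(payloads, expected_origin):
    """Keep only payloads whose (integrity-verified) origin matches."""
    return [data for origin, data in payloads if origin == expected_origin]

received = [
    ("https://video.example", b"chunk-1"),
    ("https://evil.example", b"chunk-x"),   # wrong origin: discarded
    ("https://video.example", b"chunk-2"),
]
print(filter_by_origin(received, "https://video.example"))
```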
F
We have a number of protocol extensions, and the reasons are basically all driven by need: we're asking for roughly ten new frames at this point. We outline them in the draft, and this is part of the implementation.
F
That's working there, but it's essentially just management of the multicast channels in the unicast stream, where the server tells the client about them; management of the client state and the ACKs that are associated with it; plus the integrity guarantees that require a separate integrity path. We can go into the details of all this, and I'd be happy to talk offline as well.
F
We probably don't have time for as detailed a discussion as what we saw in some of the more standardized pieces earlier in the session, but next slide.
F
The basic question I've got is: is this interesting to anyone? We think that we can solve a big scale problem here. We certainly would like to deploy this; we're interested in deploying it non-browser at first, because we've had some skepticism, I think, from the browser community so far. But from the ISP community we have some consensus that the scaling problems need to be solved: there are too many events that are exceeding the capacity of the networks to deliver when there's popular content.
F
This is primarily driven by popular sports and by large downloads. Most large downloads would not be web traffic, but popular sports often is, particularly when you consider that a lot of the smart TVs are basically using web APIs underneath (something like 60-plus percent and growing, last I heard), plus various forms of browser apps, and a lot of the clients are actually using browsers inside as well. So yeah.
F
What do you recommend we should do before asking for adoption, or where should this go? We'll be discussing this more in MBONED in the next session, by the way, so come there if you're interested and we can get into it more.
X
Hi, Alex Ranowski, Google. I was looking over the slides before your talk, and I mostly had the observation that what you're proposing feels like something that is not in QUIC, but something that could be built on top of either TCP or QUIC with the type of control-channel stuff that you're doing. So my question is really: is QUIC the right place to do this, or is this a separate protocol for "here is how I want to join a multicast thing" and a separate framing layer for "here is the multicast-available data"?
F
Well, we're looking for some frames in the IANA registries for QUIC. Does it have to be QUIC? No; you could have different multicast protocols, that's possible. We could do something that has the same security properties that's not QUIC, or we could have our own proprietary extension if we're going to be shipping fat clients. But we would like to aim toward something that can eventually be included in browsers, so that's where I'm coming from with the QUIC effort. Thanks.
O
Hey, Larry Zachary. I liked this already when Lucas came up with it a few years ago, and I'm glad to see that it's progressing, because we've all had this gap in transport between unicast and multicast, and we could never quite figure out how to actually leverage the power that multicast potentially offers more easily for applications and for users. This goes in that direction, which is kind of exciting.
O
I don't want to speak about the adoption call, but in terms of what I would see: what do you think about how general a mechanism this could be? There are some obvious use cases motivating this, but this is sort of the first time we have an approach that mixes unicast and multicast pretty easily and efficiently. It would be great if it were usable for lots of stuff. Thank you.
F
Yeah, that's a good question. I guess I would point to the WARP and RUSH proposals, which we think we can just do as-is, because they use server-initiated data for pushing from the server side. That's the demo I'd like to run inside a custom browser: just use WebTransport with the existing thing, and it doesn't have to change the application at all. I think that server-push-based HLS or DASH could do the same thing.
F
Although server push, I gather, is being removed from the browsers, unfortunately, so I'm not sure that'll be any better.
A
Mike Bishop. I'm a little surprised that you're putting QUIC frames on the multicast stream; I would have thought the multicast stream would just be raw data delivery, and then the frames on the unicast QUIC session tell you what to do with it, basically.
F
Well, it's an alternate path: the multicast channel is just an alternate network path that carries the same data. So once you've decoded it, you merge it as part of the same connection for your packet processing, and you can have almost all the same behavior. There are some differences, like in flow control and congestion control.
F
There are some differences in the connection ID versus the channel ID, but most of the packet processing is actually the same, and it works out pretty well.
E
Chair interrupt here: we're over time now. I'd like to thank everyone. Jake, sorry to interrupt you, but I think we got some feedback there, which is kind of good, and I'd encourage hallway discussion for anyone that didn't make the cut. We had one more talk from Emil, which we didn't make time for; I apologize, Emil. This was in relation to a side meeting that was held yesterday, and I believe some of the participants of the QUIC working group were there. I wasn't able to attend.
E
Unfortunately,
my
understanding
is
this
relates
to
quick,
observability,
tooling
visualization
analysis.
My
kind
of
chair
comment
with
a
hat
on
is
that's
work.
That
is
completely
related
to
quick
and
I'd
love
to
see
happen
in
the
quick
working
group
where
it
makes
sense.
So
I'd
encourage
that
kind
of
thing
to
be
proposed
that
the
next
quick
agenda,
if
it
makes
sense-
and
please
speak
to
the
chairs
about
that
in
in
future-
and
we
can
have
that
discussion
a
bit
earlier-
maybe
that
would
be
great.
E
Nope.
Okay.
On
that
note,
I'd
like
to
thank
our
jobscribe
and
notetakers
very
much
I'd
like
to
thank
zahed
for
being
on
the
ground
and
helping
with
the
delegation
of
responsibilities,
and
everyone
have
a
great
rest
of
your
week
and
a
good
lunch.