From YouTube: QUIC WG Interim Meeting, 2020-10-22
B
I can... WebEx cunningly put the stop sharing button over the unmute button. Yeah, so apparently the lobby... so I guess the lobby works.
B
Remember enabling it, but basically we have to let people in at some point, or we can also turn off the lobby at some point, then.
B
Evening. We still have people waiting in the lobby for a little while longer, but we put you in now in case there's anything we wanted to chat about beforehand. Yeah.
B
Yeah, video. I usually forget to start the recording, which is why I auto-started it and will cut it later; a bit cheap. Now, okay.
B
Basically, the stuff is on the datatracker. I uploaded the materials earlier on the agenda, and I'm going to try and keep people to Q&A on clarifications during the presentations, and then we have a hopefully long enough slot at the end.
C
Hopefully, that's... yeah, I think it's... I think you have to keep the presentations short also.
C
We have to time them a little bit, yes, because otherwise it's going to be more than the five minutes plus five minutes for clarifications.
B
Yeah, so, I think we're gonna... should we try and use the WebEx chat, Lucas, or do you want to do Jabber, which not everybody has?
A
I mean, I'd encourage people to speak verbally, but use the WebEx chat just for queue management. Okay, do people want to really make written comments then?
B
Well, people typically chat on the side somewhere, and so...
B
Yeah, let's see if our Jabber channel now works in the Slack... no, the bridge... and whether he fixed it.
F
So we actually have a lobby, which I didn't have when I joined; this is because you selected the wrong WebEx meeting type when you scheduled the meeting.
B
Yeah, if you let people in, I'll start talking. So welcome, folks, to the QUIC working group interim meeting, the first one after our main base drafts went into IETF last call. So that's a round of applause; normally we would probably have some drinks now, or yesterday, but it is what it is. Glad you could all make it. There are still people coming in, but you're not missing much. The first thing I'm going to show you you should already know, which is the IETF Note Well.
B
Well, there's a lot of text here. I've highlighted the stuff that I think is most important, which is: you should really read BCP 79 if you're participating here and if you're making contributions. And if you don't know what the terms contribution and participation are defined to be, you should definitely read BCP 79.
B
The IPR policy is arguably the most important one, which is that if you make a contribution that's covered by patents or patent applications that are controlled by you or your sponsor, you must disclose that, or you must not participate in the discussion. Again, for what participate means, you need to read BCP 79. And you agree to be civil in your participation and work respectfully with others; that should not come as a surprise. As I said, we have very little administration.
B
And if you are willing to be a second, type that in the chat so we know. Also, everybody is obviously welcome to go to this CodiMD link that has also just been pasted into the chat; it's also linked from the agenda, and you can help Robin out there. There's a question about blue sheets: I was brave enough to rely on the WebEx registration feature for this, which I have never used before.
B
I will, when the first presentation starts, go and look and see what data that gives me. I asked it to collect names and affiliations; we'll see if it actually did that. If there's a problem, I'll do an interrupt at some point and we'll do the blue sheet inside the CodiMD or something. But for now I'm trusting in WebEx. For the chat, Lucas is going to monitor and manage the queue.
B
So we would like to use the WebEx chat for queue management exclusively, so that things don't get lost there. And if you feel like you want to chat, we have an IETF Jabber room that some people are in, but not very many, actually. And unfortunately, the bridge we had between the Slack jabber channel and the XMPP group on the IETF Jabber server seems to be broken. I don't know if mnot is here or not, but if you are here, Mark, can you kick that proxy into submission so that hopefully stuff gets relayed.
B
I guess it's just you, Robin; feel free to interrupt people if you need them to slow down or repeat something, since you are on your own, unfortunately.
B
Right, the agenda is relatively simple. We are at bullet one; I'm going to talk a little about scoping right after this. We asked the MASQUE working group chairs in the last day or two if they, or somebody from MASQUE, would be available to give an overview.
B
In addition to what I'll briefly be talking about, just on what QUIC offers. And with those two overviews in our heads, we can then go and look at the use cases and requirements. We have six of those; we ask people to be very brief.
B
We're going to try and do the presentations in five minutes and leave five minutes for Q&A. We're going to try and keep the Q&A during that part of the agenda on clarification only, and we have 40 minutes of open discussion at the end, where we can have a broader and deeper discussion on things.
B
I guess not, right. So the point of the meeting is to talk about multipath in QUIC, and Ian sort of suggested that we have this meeting after we had some exchanges on the mailing list, where, you know, a few people who maybe don't hang out on the implementers' Slack all day long were surprised that at least some part of the implementation community had maybe moved away from our initial belief that multipath was something that we definitely wanted to do for QUIC.
B
That was the state three years ago, when we wrote the charter. Google at the time had been experimenting with multipath and were still pretty positive about it, and so it seemed like the right thing to do to just put that into the initial charter of QUIC. Three years later, at least for some people, the support that IETF QUIC has for connection migration, and for migration to a preferred server address, hits most of their use cases, and so some people have sort of questioned...
B
...whether multipath is something that QUIC should do. Others remain very much interested in doing it, and so we wanted to give people time to talk about multipath specifically, for a little bit longer and in a bit more detail, and then we'll see what that means for us going forward, specifically for our meeting at IETF 109.
B
I guess, in the absence of anything else important happening, we would use the remainder of any time for multipath, because I think that's the other big topic at the moment. But there's also the ops drafts and various extension drafts that probably might have seen updates by then, so we'll see what we do. But we wanted to talk about multipath now, and so that's why we're here. Right. So QUIC, as you all know, I would assume, is an end-to-end protocol. It's encrypted.
B
There's a client and a server, and QUIC has some support for using multiple network paths, but it's not very powerful. Specifically, a server can tell the client during the TLS handshake, with a transport parameter, what its preferred addresses are, for v4 and v6, that the client should be migrating to. So that's a way for the server side to make the client migrate... well, to suggest to the client that they really should migrate to another address.
B
The other option, on the client side, is a migration that can either be passive, meaning a NAT rebind or something that the client might not even know about, or the client can actively use a different IP address and port and connection ID to talk to the server from another endpoint, and there's this path validation that happens then; and the client can do this multiple times.
B
So those are the sort of two mechanisms that QUIC has, and they allow you not parallel use of multiple paths, but they allow you, quote unquote, some sort of failover capabilities.
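To make the migration mechanism concrete, here is an editor's toy sketch of the path validation handshake (PATH_CHALLENGE and PATH_RESPONSE, per RFC 9000) that runs when a client starts using a new address. All class and field names are invented for illustration; this is not code from any real QUIC stack.

```python
import os

# Toy model of QUIC path validation: the migrating endpoint sends 8 random
# bytes on the new path and only treats the path as usable when the peer
# echoes them back unchanged.

class Endpoint:
    def __init__(self):
        self.pending = {}               # challenge bytes -> candidate path

    def start_validation(self, path):
        data = os.urandom(8)            # arbitrary bytes, echoed back verbatim
        self.pending[data] = path
        return ("PATH_CHALLENGE", data) # sent on the new path

    def on_path_challenge(self, data):
        # The peer must echo the data unchanged.
        return ("PATH_RESPONSE", data)

    def on_path_response(self, data):
        # Only a matching echo validates the path.
        return self.pending.pop(data, None) is not None

client, server = Endpoint(), Endpoint()
frame_type, data = client.start_validation(("203.0.113.7", 4433))
_, echoed = server.on_path_challenge(data)
assert client.on_path_response(echoed)              # path validated
assert not client.on_path_response(os.urandom(8))   # wrong echo: rejected
```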
B
Although you can make the argument that the server side is not very strong. Right, so there's something in QUIC on failing over, which is helpful. Multipath, and Multipath TCP, offer much more than that: specifically, they allow a TCP connection to use multiple network paths at the same time for a single connection. And that's not failover; it's pooling the capacity, and using the aggregate capacity of those paths at the same time. And that's something that QUIC version 1, as we've specified it now...
B
...does not have. And I think when the charter was written, that was sort of the desired addition that we wanted to talk about with this multipath milestone. But it was never really clearly defined, and therefore, going into the use cases...
B
Some of the use cases are much more focused on the failover part, and others are maybe much more focused on the capacity pooling part. And it's useful to understand that the mechanisms look similar, but they're actually different; failover might be enough for one use case, while another really requires this pooling of capacity.
B
Okay. With that, I think we're going to try and go to the MASQUE overview, which talks about using QUIC as a tunneling protocol. And I would prefer if people could run their slides from their own machine; I forgot to say that earlier. I'll pass around the presenter ball. If you can't, let me know, and I can show the slides.
L
Lars, David's going to be presenting for MASQUE here. I can't... I'm presenting, er, I'm watching this in the web client, and I don't think you can do that from the web client. So if you don't mind, it's only six slides, it would be helpful if you didn't mind driving.
B
Okay, I want to share something else.
M
While you're working on that, Lars: I have a hard stop an hour into the meeting, and I would love to get a few words and a couple of thoughts in before I have to go, at some point.
B
Okay, sounds good. Thank you. Is this visible, or should I go full screen? If I go full screen, I can't see stuff anymore in the browser.
L
Oh, then it can be... I think it's legible, yeah.
L
All right, okay: five minutes, here I come. All right, so hi everyone, my name is David Schinazi. I work on Chrome, but most of you already know me as a QUIC enthusiast, and also as a MASQUE enthusiast. So I'm here today, just as Lars was saying, to quickly explain what MASQUE is and how it might be relevant to multipath. Next slide, please.
L
HTTP has supported CONNECT for a long time, and that's great for proxying TCP, but the Internet is not just TCP, and especially when you add QUIC, that becomes quite limiting. So MASQUE is a newly formed IETF working group, which we formed this year, with the goal of adding new options for proxying over HTTP/3.
L
So how do we have things that are maybe a little bit like CONNECT, but different, or more powerful, or for more use cases? I'm not going to read the entire charter, but one of the things that are explicit in the MASQUE charter is that multipath itself, and also the discovery of MASQUE proxies, are out of scope for MASQUE. I'll go into a little bit more detail about those later.
L
What does the MASQUE working group do concretely today? Initially there were various different proposals for MASQUE, and what we landed on as current work items: the first one is CONNECT-UDP. So take the CONNECT you know and love, and instead of TCP you do it for UDP. That's kind of all there is to it.
L
The slightly interesting bit is that when you're running over HTTP/3, you can use DATAGRAM frames, which are an extension to QUIC, not part of the QUICv1 core protocol, but an extension that allows you to send little bite-size pieces of data that are not retransmitted. So that's exactly what you want for something like CONNECT-UDP, because it means that if a packet is lost on the client-to-MASQUE link, it doesn't get retransmitted by the tunnel.
L
So if you were to run, let's say, QUIC over CONNECT-UDP, you don't end up with nested loss recovery loops, which can often be inefficient.
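The nested-loss-recovery point can be illustrated with a toy count of link transmissions when a proxied packet is lost, comparing a reliable stream tunnel against an unreliable DATAGRAM tunnel. The function and the numbers below are invented for illustration.

```python
# Toy model: a reliable (stream) tunnel retransmits every lost proxied
# packet itself, even though the inner QUIC connection also recovers the
# loss end-to-end, so the same data crosses the proxy link twice.

def deliveries(tunnel_reliable, inner_packets, lost_on_link):
    sent = 0
    for pkt in inner_packets:
        sent += 1                  # first attempt on the proxy link
        if pkt in lost_on_link:
            if tunnel_reliable:
                sent += 1          # tunnel retransmits the stale copy
            sent += 1              # inner QUIC retransmits end-to-end anyway
    return sent

inner = ["p1", "p2", "p3", "p4"]
lost = {"p2"}
stream_mode = deliveries(True, inner, lost)     # CONNECT over a stream
datagram_mode = deliveries(False, inner, lost)  # CONNECT-UDP over DATAGRAMs
assert stream_mode == 6 and datagram_mode == 5  # one redundant copy saved
```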
L
So what we have right now in the MASQUE working group is: we've adopted a document to discuss requirements for VPN over HTTP/3, the goal being that once we agree on the requirements, we start building a solution, which could end up looking like a method called CONNECT-IP, or something different. And then something that came up lately, that is not adopted yet, written mainly by Tommy Pauly from Apple and also myself, is an extension to CONNECT-UDP.
L
But in a hypothetical world where multipath QUIC exists, you could send on both paths. One thing that I want to point out is that this is very orthogonal to MASQUE. Because, let's say, for example, your end-to-end traffic, shown in red here on the diagram, is over TCP: your client just told the proxy, hey, I want to do CONNECT to the web server on 443, and then you did end-to-end TLS, and then you have HTTP.
L
On top of that, there's no MASQUE involved whatsoever there; that's vanilla HTTP/3, and that could potentially benefit from multipath. Similarly, CONNECT-UDP could. But the thinking there is: you don't need to change MASQUE to get multipath. If QUIC has it, then MASQUE gets it for free. So they are kind of orthogonal efforts that can benefit from one another, but you don't really need to change MASQUE for this, or any of the documents discussed in MASQUE. Next slide.
L
Please. And another example of a way that multipath can interact with MASQUE is end-to-end multipath. So let's say you have an HTTP/3 connection between your client and your target web server: you could have, for example, one leg going direct over Wi-Fi and one leg going over a MASQUE connection to a proxy. So that could be used...
L
For example, if you have one network that you trust, you can go end-to-end, and another one that you really don't trust, or you want an extra layer of obfuscation, and you use MASQUE for that, or for other reasons. In this case, similarly, it's also pretty much orthogonal to MASQUE, because from the perspective of the connection between the client and the MASQUE server, it's just shoveling encrypted packets back and forth.
L
It doesn't really care that you have that other leg on the side over Wi-Fi; it doesn't even know about it, and it doesn't need to. So, similarly, multipath and MASQUE can benefit from each other, but there's no direct link for the two to interact directly. And next slide, please... that's it, that's kind of all I had. Does anyone have any questions?
N
Thank you for the presentation; I learned more than you maybe thought I would have. Do you have a sense right now, because I know this was a topic in the MASQUE adoption for CONNECT-IP, whether that's leaning more towards an IP address or an IP prefix? Do you understand my question?
L
I think I do. So that is a question that we will resolve in the MASQUE working group as part of our...
L
...the adopted IP requirements draft. I believe that we have not yet answered this question. I have...
L
...what I think, yeah, but I would take all that conversation to MASQUE, because it's not really relevant to multipath.
N
Yeah, absolutely, thank you. I just wanted to make sure I was as current as you were.
D
Christian Huitema. David, you say that multipath is orthogonal to MASQUE, but there is a point at which multipath and tunneling interact, which is path MTU. Any application that runs over a VPN or a UDP proxy will have to do path MTU discovery, and arguably multipath means that we have some packets with one MTU and some packets with another.
L
So the way I personally think about it is that that's orthogonal to MASQUE, because that's specific to QUIC: even if there is no MASQUE involved and no multipath involved, I still need to do path MTU discovery to know how many bytes I can fit per packet on my Wi-Fi interface, and if I do a connection migration to cell, I need to redo path MTU discovery after that migration, on the new interface.
D
Well, what I'm saying, David, is that if you do path MTU discovery in QUIC, we have the mechanisms to do that on each path. But if you do that on the aggregate of several paths, if someone underneath aggregates several paths to the point that sometimes you can send 1500 bytes and sometimes you can send 1200, then QUIC has a very hard time dealing with that.
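Christian's concern can be sketched numerically: if a tunnel silently aggregates paths with different MTUs, the inner connection can only safely probe for the minimum across paths, giving up capacity on the larger one. The overhead constant and MTU values below are invented for illustration.

```python
# Toy sketch: the safe inner packet size under path aggregation is bounded
# by the smallest path MTU minus the tunnel's own framing overhead.

TUNNEL_OVERHEAD = 60   # assumed bytes of outer UDP/QUIC/CONNECT-UDP framing

def safe_inner_mtu(path_mtus):
    return min(path_mtus) - TUNNEL_OVERHEAD

paths = {"wifi": 1500, "cell": 1280}
assert safe_inner_mtu(paths.values()) == 1220

# Probing per path instead would keep the larger size usable on Wi-Fi:
per_path = {name: mtu - TUNNEL_OVERHEAD for name, mtu in paths.items()}
assert per_path == {"wifi": 1440, "cell": 1220}
```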
L
That's a good point. I think that also applies to IP, because your IP route could change over the network without you having any knowledge of it, and could flap between things. But I agree with you that that is less likely, whereas when you have a proxy that is deliberately connection-migrating or something, it could flap more often. My intuition is to maybe start off with something without it and see.
P
Can you hear me? Okay, thank you. Jana Iyengar; thanks for the presentation, David. So, one question that's not clear to me. What you've shown here is that MASQUE can use multipath, and multipath in MASQUE, as Christian is pointing out, might have some interactions, but we can look at this. The question that I guess I'm trying to square away in my head is: why would MASQUE need multipath? This is more of an application...
L
I'll try to be quick. So, absolutely, I agree with both of you. I'm personally not yet convinced that MASQUE would be measurably better with multipath, but I think that's part of the presentations that will follow this one.
P
Okay, I appreciate that. I guess I had the wrong frame in my head when I was looking at the presentation; that helps, yeah.
B
There's a bunch of use cases that will follow now shortly that basically use QUIC as a tunnel, and since MASQUE is defining something like that, we thought it would be useful to have David present what MASQUE offers at the moment.
M
Thanks, and I really regret having to leave, by the way. So the basic observation I'm going to make is: why is the connection the right place for multipath?
M
I think that historically we've chosen the connection as the place for multipath for two reasons. Number one: we don't want to change a bunch of applications and require them to be aware all the time, so putting it down in a layer that is traditionally opaque has been useful. And number two: because the connection has just kind of evolved to be a proxy for the session that the application is trying to do. And I think, as a historical accident, that's all fine; I mean, things evolve.
M
If what I have instead is something that gets me an ability to route to the same server, to establish a session that both the client and the server understand, irrespective of whether it is the same connection or not, that it is the same session... you can still put this behind an opaque barrier and call it a connection if you wish. But I believe that having that option, to have multiple connections and establish a session, is fundamentally more powerful in the end. It allows us to do things with applications that the applications would probably be more likely to do. And that's it, that's all I have.
M
You know, multipath is about muxing and demuxing, like other things; it's just that we're muxing and demuxing onto connections... sorry, onto paths. And do we want those to be associated with the connection, or somewhere higher? And that's it. Thank you.
B
All right. With that, we're going to start on the use cases, and we thought, historically, since Chromium has experimented with this, we would take that presentation first. I think it's Fan and Jana together, right? I'll run it from here, I think.
P
Thanks, Lars. Jana Iyengar; I'm just gonna take like two seconds to introduce Fan. So this is a presentation that Fan at Google is going to go through. Fan implemented multipath in Chrome, this was a few years ago, and he drove the implementation and the design, some of which he's going to go through.
P
The point of this presentation is not so much to talk about the design, but to talk about the experience of having gone through this in QUIC once. Now, of course, experiences will vary, as implementations tend to be different, and the design here is only one; all of this is just one data point in the space of designing and implementing multipath in QUIC, but it's relevant, and therefore it's here. I'm going to hand it off to Fan. Thanks, Fan.
R
Thank you, Jana. Yeah, I'm working on the Google QUIC team, and this is about the multipath work back in 2016, which I worked on with Jana and Ian.
R
So that's the experience, and the challenges we were facing. So, next slide, please. Here is the high-level design. Back in 2016, of course, there was no IETF QUIC, so the multipath was designed for Google QUIC.
R
GQUIC does not have multiple connection IDs, so we used a path ID to identify a path, and for each path there is a packet number space; so to identify a packet, we use path ID plus packet number. And the design uses a unified ACK frame: basically, the ACK frame will contain the packets received on all the paths, so the expectation is that the acknowledgment will come back on the soonest, fastest path. And also, of course, we use a separate congestion controller and loss detection per path.
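The design described here, packets keyed by (path ID, packet number) with per-path number spaces and per-path congestion state, plus one unified ACK frame, can be sketched roughly as follows. All structures and names are invented for illustration; this is not gQUIC code.

```python
from collections import defaultdict

# Toy sketch of per-path packet number spaces with a unified ACK: each path
# numbers its own packets, and one ACK can cover packets from every path.

class MultipathSender:
    def __init__(self):
        self.next_pn = defaultdict(int)    # per-path packet number space
        self.in_flight = {}                # (path_id, pn) -> payload
        self.cc_bytes = defaultdict(int)   # stand-in for per-path CC state

    def send(self, path_id, payload):
        pn = self.next_pn[path_id]
        self.next_pn[path_id] += 1
        self.in_flight[(path_id, pn)] = payload
        self.cc_bytes[path_id] += len(payload)
        return (path_id, pn)

    def on_unified_ack(self, acked):       # acked: iterable of (path_id, pn)
        for key in acked:
            payload = self.in_flight.pop(key, None)
            if payload is not None:
                self.cc_bytes[key[0]] -= len(payload)

s = MultipathSender()
a = s.send(0, b"hello")   # path 0, packet number 0
b = s.send(1, b"world")   # path 1 has its own space, so also number 0
assert a == (0, 0) and b == (1, 0)
s.on_unified_ack([a, b])  # one ACK frame covering both paths
assert not s.in_flight
```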
R
Of course, retransmissions could go over different paths than the original. So, next slide, please. Back at that time, the implementation was a little challenging, especially because of how our code was written at the time, I think. So when one packet is deemed lost, we basically pull out the frames and re-serialize them into a separate packet, and then we use a struct to record the connection between the two packets; for example, when one packet gets acked...
R
...we consider both of them acknowledged. That becomes more challenging if we retransmit packets across paths: we need to build an even more complicated structure to record the connections between packets spread across multiple paths. But now, I think, the implementation is easier, because we got rid of the connection between packets; we only record the content in the packet. For example, when a packet gets acknowledged, we report to the corresponding stream, saying: hey, this bunch of stream data has been acknowledged; instead of the packet.
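A rough sketch of the simpler bookkeeping Fan describes: the sender records only which stream bytes each packet carried, so acknowledging a retransmitted copy on another path needs no link back to the original packet. All names are invented for illustration.

```python
# Toy model: loss and acknowledgment are tracked as stream byte ranges,
# not as links between original and retransmitted packets.

class Stream:
    def __init__(self):
        self.unacked = set()          # byte offsets still outstanding

    def write(self, offsets):
        self.unacked |= set(offsets)

    def on_acked(self, offsets):
        self.unacked -= set(offsets)  # no packet identity needed

stream = Stream()
stream.write(range(0, 100))

packets = {}                          # (path_id, pn) -> stream byte range
packets[(0, 0)] = range(0, 50)
packets[(1, 0)] = range(0, 50)        # same bytes retransmitted on path 1
packets[(0, 1)] = range(50, 100)

# The retransmitted copy on path 1 is acked; the original on path 0 never
# is, yet the stream's bookkeeping comes out right anyway.
stream.on_acked(packets[(1, 0)])
stream.on_acked(packets[(0, 1)])
assert not stream.unacked
```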
R
That being said, the implementation can be very difficult, depending on your implementation. Next slide, please. So yeah, the next challenge we were facing is scheduling. I think that's exactly as Roberto said: we decided to put multipath in the connection, so we were facing this challenging scheduling problem, because you will never figure out an appropriate, or good, scheduler without knowing what the application wants. For example, the application may want to minimize latency.
R
We may put packets always on the path with the shortest RTT; or, if the application cares about bandwidth, we will dump packets onto all available paths; or, if the application cares about the user's data plan, we can only send packets on the Wi-Fi, maybe very important data on the cellular; or the application is very sensitive to packet loss, for example, audio transmission.
R
Then we may add redundancy and send the same packets over multiple paths. So without that knowledge, it's so hard to develop a scheduler which can be used for all kinds of applications. And also, inside Google, we never had buy-in from a customer saying: hey, I want this kind of scheduler, and I want multipath, because connection migration cannot meet my requirement.
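The point that the scheduler is policy rather than mechanism can be sketched as follows: the same machinery picks paths very differently depending on whether the application wants low latency, bandwidth, cheap data, or loss protection. The path metrics and policy names are invented for illustration.

```python
# Toy multipath scheduler: each application goal maps to a different
# path-selection policy over the same path state.

def schedule(paths, goal):
    # paths: {name: {"rtt_ms": ..., "metered": ...}}; returns path name(s)
    if goal == "latency":
        return [min(paths, key=lambda p: paths[p]["rtt_ms"])]
    if goal == "bandwidth":
        return list(paths)               # spray across every path
    if goal == "cheap":
        return [p for p in paths if not paths[p]["metered"]]
    if goal == "loss_protection":
        return list(paths)               # duplicate on every path
    raise ValueError(goal)

paths = {"wifi": {"rtt_ms": 40, "metered": False},
         "cell": {"rtt_ms": 25, "metered": True}}
assert schedule(paths, "latency") == ["cell"]
assert schedule(paths, "cheap") == ["wifi"]
assert sorted(schedule(paths, "bandwidth")) == ["cell", "wifi"]
```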
R
We never had that requirement. So basically, what we kept hearing was: why don't you improve connection migration first and see if it's enough? So inside Google we have put much effort into making connection migration work, and that's pretty good, actually. So, next slide, please. Here are the lessons we learned: after facing these challenges, and the implementation complexity, and the lack of use cases, we basically removed the multipath code from the Chromium repository.
R
So the lessons learned are: multipath increases code complexity, and this can be small or huge, depending upon the implementation; scheduling is hard and depends on the use case, so if you are not working closely with the application, you are likely to fail; and most use cases only need connection migration. I think the good example is the Android Google Search app: we deployed connection migration there, and they're happy, and they're not asking for multipath.
N
I could reasonably ask: you're saying connection migration works well enough, so you don't have to have... you know.
N
Thinking about multipath now might make sense, if multipath makes sense where it didn't previously. But I think I'm also hearing you say what Roberto was saying: the more you know about what you're doing with multipath, the better job you can do, rather than being a general-purpose transport.
R
Yeah, totally. I mean, basically, my understanding is we really need a killer use case saying: hey, connection migration does not meet our requirement, we really want to use multipath, and they will closely work with us on developing a scheduler which meets their requirement. In my experience, that never happened inside Google. So that's why we decided not to go for multipath in Chromium.
H
Thank you; Ryan next. Fan, good to see you again, just a quick question: do you think the new architecture, with retransmissions being in the stream instead of in the packets, would simplify the implementation to the point that it would be straightforward, or do you think it would still be huge?
R
Hey Ryan, good to see you. Yeah, totally; I think that will simplify the implementation a lot, because now we only record the stream data, so there's no connection at all between packets spanning multiple paths. The stream has no idea which path, or which packet number, its data was sent on; it does not care. All it cares about is: hey, this stream data has been acknowledged, I can remove it; or this stream data is lost.
O
Hey, so I heard a lot that you were talking to many clients at Google: Google Search, other client applications. I wonder if you had any conversations with server people, whether they were ever interested in connection migration or multipath on the server side. Like, for example, YouTube, I'm just guessing: some connection comes to one server, but the object is really on a different one, and we want to migrate to the one that has it. Any conversations like that?
R
Yeah, we did. Actually, I'm working on the server side in the QUIC team. So I think we previously had a use case...
R
...we thought about exactly that: the server will send out the preferred address after finishing the handshake, saying, hey client, you can talk to me at this preferred address. And then, at that time, the client basically has two addresses to talk to; this is kind of multipath, but still talking to the same server. But anyway, to my knowledge, that does not work great with our infrastructure.
G
Yes, okay, perfect. WebEx shows me as still on mute. Can you open up the slides? I can't see them at the moment.
G
What is tricky with Siri is that it is building up state on the server as the user is speaking. So it is a connection that is hard to recover when the connection is lost: basically, the only way to recover it is by sending all the data yet again. So that means we try very hard to reduce network errors with Siri, because it's so costly to re-establish a TCP connection.
G
The environment in which Siri is running is a thin bidirectional stream. That means the application is sending small amounts of data at a regular, very short time interval, let's say, for example, 100 bytes every 20 milliseconds, and the server keeps on responding. So it's very interactive traffic.
G
It's not a bulk data transfer. And finally, Siri is also often used in mobile environments where people are walking out of the home, so the traditional Wi-Fi-fading-away scenario is very frequent, because people use it when they walk out of their home and say, "Hey Siri, turn off my lights," or whatever. Right. So this is the setting we are in for Siri. Next slide.
G
The way we use multipath is, first of all, when we create a Siri connection, we immediately establish the TCP subflows on both Wi-Fi and cell. So even if Wi-Fi is in a very good state, we are immediately, basically, warming up the cellular link as well. This enables us to switch from Wi-Fi to cell without having to go through a TCP handshake: we can immediately start sending, and of course start receiving, data on the cellular link. And it also primes us with an initial RTT measurement.
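A minimal sketch of the warm-up idea, assuming an entirely invented API: both subflows are established at connection setup, so a later switch to cellular needs no handshake and already has an RTT estimate to feed the scheduler.

```python
# Toy model: open Wi-Fi and cellular subflows up front; the handshake on
# each subflow primes its smoothed RTT estimate.

class Subflow:
    def __init__(self, name, handshake_rtt_ms):
        self.name = name
        self.srtt_ms = handshake_rtt_ms  # primed by the handshake itself
        self.established = True

def open_connection():
    # Establish both subflows immediately, even if Wi-Fi looks healthy.
    return {"wifi": Subflow("wifi", 30), "cell": Subflow("cell", 55)}

conn = open_connection()
assert all(sf.established for sf in conn.values())
# Later, a switch to cell is instant and its RTT is already known:
assert conn["cell"].srtt_ms == 55
```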
G
So that way we know the Wi-Fi link quality, a certain RTT, and we also have an RTT measurement for the cellular link. We then have those two paths ready, and then we start scheduling traffic in what we call the interactive mode. There's one note that I want to make:
G
We published the multipath transport APIs a few years back, with the different scheduling modes that, from our experience, are most useful to the applications: we have handover mode, interactive mode, and aggregation mode. Siri is using the interactive mode, because it's a thin stream: not a lot of data, but very latency-sensitive. Okay.
G
So when we're sending, we are continuously evaluating the path characteristics; we are looking at the RTT and packet loss, and for every single packet that we are transmitting, we decide what is the most optimal path at that particular moment in time. And the best path is basically RTT-based: we choose the one with the lowest RTT.
G
If, for example, retransmissions are happening on one path, we then schedule traffic on the other path, to quickly overcome the delay that is introduced by the retransmission, because the retransmission can be blocking the congestion window.
G
Next slide: Apple Music. So the goal with Apple Music is to reduce playback stalls, and if we still stall, we try to reduce the duration of the stall. The environment we have is a bulk data transfer, where we are transmitting the entire song to the device in one transfer, and we also have a play buffer that actually hides most networking issues. Next slide.
G
So what we are trying to do here, because music can transmit a very large amount of data, is to minimize cellular data usage as much as possible; that's the ultimate goal. And how we do it is: we avoid cellular data at all cost. Only when we are getting close to having a problem in the music streaming, that's when we bring up the cellular link, and then, because we are very close to basically having a music stall...
G
That's when we try transmitting data on both paths as fast as possible, because we want to avoid this music stall from happening. So here what we do is actually resource pooling, to aggregate both paths. Next slide.
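The policy described in the last two turns can be read as a buffer watermark rule: Wi-Fi only while the play buffer is healthy, then pool both paths when a stall is imminent. A minimal sketch, where the watermark value and function shape are invented for illustration:

```python
# Illustrative sketch of the described Apple Music policy: stay on Wi-Fi
# alone while the play buffer is healthy, and only bring up the cellular
# path (pooling both paths) when the buffer is low enough that a stall is
# imminent. The 5-second watermark is a made-up example value.

LOW_WATERMARK_S = 5.0   # assumed threshold: below this, a stall is imminent

def allowed_paths(play_buffer_s, wifi_up=True):
    """Return which paths the transfer may use right now."""
    if wifi_up and play_buffer_s > LOW_WATERMARK_S:
        return ["wifi"]                              # avoid cellular at all cost
    return ["wifi", "cell"] if wifi_up else ["cell"]  # pool to refill quickly

assert allowed_paths(30.0) == ["wifi"]
assert allowed_paths(2.0) == ["wifi", "cell"]
assert allowed_paths(2.0, wifi_up=False) == ["cell"]
```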
A
L
Yes, David Schinazi, Google. Hi Christoph, thanks for the presentation. I think it's really great that you split things out in terms of: okay, here are the things that Apple cares about for these services, and the requirements on QUIC. My question for you: looking at most of your requirements on QUIC, they seem to be already covered by QUIC and connection migration.
L
G
L
One requirement is that today it has to be client initiated: the client is the one migrating. But from my understanding of the Apple stack, that's what you prefer, because the client knows the cost of the interfaces and all that better than the server does.
G
So I'm not sure that QUIC migration is able to do this, because each time we switch, we need to reset the congestion control, which, I know, Christian has a draft where he says it's possible to do that. But.
B
I would like to punt that part of the discussion to the general discussion, because the point was clarification on the presentation, and now we're doing clarification on QUIC. I think other use cases will have these questions: can QUIC do this? So I think that would be best handled at the end. Thank you. All right, my apologies; thanks for the answer, Christoph. Yep.
I
Matt Joras, Facebook. Echoing also: I thought it was a good presentation. My question is pretty brief. You mentioned Apple Music; does that team have any experience doing non-music streaming, like video streaming? I ask because I do think the considerations are a bit different, because music data can be very, very small, whereas even a modestly sized video will be larger than an entire song, and I think it kind of exacerbates a lot of the policy problems that I see.
I
You said you want to minimize cellular data use, and I think that's a noble goal, but I think that's somewhat easier to do when you're the phone provider, as opposed to another application. So for us, for example, if you're using a Facebook app, communicating the fact that we're going to do this thing where we use both interfaces, I think, is difficult, especially for things like video. So I was just curious if you'd done things that were more data intensive than music and Siri.
G
Yeah, we haven't investigated. I mean, we looked at video; one problem with video, especially with adaptive bit rates, is that it is estimating the bitrate that it is getting, and so it could get confused by multipath. That's why we didn't go down that road at the time, also because we didn't have the time. In terms of data usage, that is always a problem, and the system usually imposes restrictions on that; at least for our multipath use cases we have Wi-Fi Assist.
G
That is limiting the cellular data usage of each application when Wi-Fi is available, and so that's like our system that is the policy that mandates: okay, you are allowed to use cellular data now, and you can only use 100 megabytes, and you're not going to be able to use them all.
P
Hi, Jana Iyengar. Thanks; same thing, it's really useful to see where these things are being used, and it's good to see you, as always. So I have two questions, one on Siri. You mentioned switching back and forth between multiple paths at sub-RTT time scales. Do you do that more than once per RTT? I would imagine not, because you would have to switch, wait for feedback, and then switch again.
P
So I would imagine that the highest rate you would be switching at is once per RTT. Is that right? I have two questions, but that's the first one. Yeah.
P
Yeah, so, just to echo something that David said earlier: it does put a cap on how you think about resource pooling here, because you're not switching at millisecond time scales; you're switching at RTT time scales. The second question was on music, and I'm echoing what Martin asked on the chat channel: when you do resource pooling, are you sending the same data over multiple paths, or are you just trying to utilize both paths' bandwidth, maximizing for that? Yeah.
U
Hi everyone from Alibaba; my colleagues and I will introduce the use cases in Alibaba's ecosystems. Next page, please. These use cases include new retail e-commerce use cases and mobility use cases; then we will talk about some 5G and beyond. There will be a summary of the requirements for path management and the packet scheduler in multipath QUIC, and a demo video of our deployed version of multipath QUIC in Taobao.
U
Next, please. The first use case in Alibaba is the Taobao mobile application. Taobao is the most popular online shopping application in China, and instead of just showing a picture, it uses short-form videos to display the product information. Today, Taobao already uses QUIC and HTTP/3 for video downloading. Reducing startup delay and stalling time is critical, so we employ multipath QUIC, which uses both Wi-Fi and LTE at the same time. The main purpose is to accelerate video downloading and reduce stalling and rebuffering, with better quality of experience.
U
The problem is that sometimes the uplink is very slow, so we want to help the streamers to stream outdoors with multipath QUIC, to use more bandwidth through two disjoint network links. The streamers can use a mobile phone with LTE from carrier 1, and use a mobile Wi-Fi hotspot to connect to carrier 2 at the same time.
T
Another strong need to deploy multipath is the high mobility application scenario. One notable example is high-speed rail in China: in the year 2019 there were over 2 billion passenger railway journeys. However, the train is moving at a speed over 300 kilometers per hour, and handovers between base stations happen every 13 seconds, so, even though the train is equipped with Wi-Fi.
T
And in summary, we presented some of the MP QUIC use cases that Alibaba has major interest in right now. Two things we want to emphasize in this presentation. First, in our implementation we build multipath QUIC over a bi-directional sub-connection concept, and this makes things simple and enables the use of most of the QUIC transport design. Second, we employ a dynamic scheduling strategy with feedback to optimize performance, and I also want to echo what was just said from Google about scheduling: we found
T
that it is really important, because in order to utilize MP QUIC, we need to understand what the application wants, and so in our design we use a dynamic scheduling strategy which allows client and server interaction, to support application awareness. And before the conclusion, here is a demo where we have integrated MP QUIC into Taobao, which is the largest shopping app in China, and Ali Cloud CDN, and we contrast MP QUIC, on the left, with single-path QUIC, on the right. As is shown, MP QUIC is effective in offering much smaller startup delay and much less stalling.
A
Thank you. You talked about high-speed trains, and that was a high-speed talk. Thank you. I don't see anyone in the queue; does anyone want to jump in, while we have the opportunity?
B
Okay, thank you. Now I'm going to ask the same, because some other slides also have MP QUIC on them, and often that is used as an acronym for multipath QUIC, but it's also the name of one particular proposal, which makes it a little bit confusing. Thank you, that was a good talk.
B
P
Jana Iyengar. Thanks for that talk. There are a number of things that I'm missing here, so maybe you could help me understand this; I didn't fully grok it. So these are all using some form of multipath QUIC, which you clarified is not actually the draft that we have out there. But what's the server, what's the client, and are you using this in production? Those are three questions, just about the setup that you have here.
T
Oh okay, I can answer that one by one. So the first question is about the deployment. The first thing is: we deploy the protocol in both our Ali Cloud CDN and also the client, the apps such as Taobao. It's like Amazon, an online shopping app, and this is how the deployment is made.
T
P
Sure, so have you experimented with connection migration in QUIC?
T
Yes, we have experimented with connection migration. One issue we found is that sometimes the connection is not broken; it's just that the bandwidth is not enough. For example, for the Taobao live streaming case, we have many internet celebrities who do live streaming, and they want to do streaming outdoors, and what they find out
T
is that they care a lot about performance, because they make money on that, so they want to make sure that their internet is very stable, and that's why they want to have multiple uplinks, especially in some rural areas in China, where the uplink internet connectivity is not very good. So if we deploy MP QUIC, this gives them an opportunity to double their bandwidth.
T
So that's one use case, and the other case is about the high-speed train, because I think LTE and 4G have already done a lot of work on handover. But the thing is, the high-speed train travels so fast that, when you have a handover every 10 seconds or so, it's still difficult to make sure that the internet connectivity is good. And the high-speed train right now has onboard Wi-Fi, and the Wi-Fi works like this:
T
It has a hotspot inside the carriage, and it also has an antenna outside the train which can connect to the cellular network. But the thing is, there are so many people sharing the same Wi-Fi in the same carriage that everyone gets very limited bandwidth. So what they want to do is say: okay, I also have cellular, so why can't I use both the Wi-Fi and my own cellular at the same time? And our design is made for the needs of these customers.
P
I just would love to see more details on what you're actually doing here, because you said you're not doing the draft, but you're doing your own thing, and I'd be very, very much curious to understand exactly what you're doing. So, if you were able to publish a draft or something like that, that would be super useful. Oh okay.
T
P
O
B
And Jana, there's an XQUIC channel on Slack, which is their implementation, which I think they're planning to open source at some point this year.
L
Yes, very quickly: thank you for the presentation. Two short questions. First one: was this MP QUIC end-to-end, from the client to the server, or was it with a proxy involved?
L
Thank you. Second question: you mentioned that users care about performance, and I totally agree. What specifically do you mean? You mentioned bandwidth, so I see you've measured that and you've shown results. Did you also measure latency and things like that?
T
Yes, yes. So one thing I want to emphasize here is that in different areas the latency can vary, especially when you use Wi-Fi and LTE: it probably goes through different ISPs. And based on the measurements, sometimes you need to change scheduling policies. That's why I think designing a good scheduler for MP QUIC is the most important thing.
L
B
V
We also switch at sub-RTT levels, but given that the RTT of the satellite link is something like 600 milliseconds, that's not too surprising, and of course we also allow bandwidth aggregation.
V
We came up with the following requirements. First, and probably the most important one: before the satellite link can be utilized in a multipath scenario, it should perform well in a single-path scenario.
V
Second, there is the need for fine-grained scheduling among both links; I don't know if connection migration would be sufficient for this. Again, scheduling for such a link combination is quite difficult, even when both parts of our path are available simultaneously.
V
The next point is that we would like to use the satellite link as soon as possible, so in the best case you would set up the QUIC connection for each path with 0-RTT.
V
Anyway, we didn't test the multipath QUIC implementation yet, so I really cannot tell how well our use case will perform in an end-to-end setup. We also thought about multipath and MASQUE proxies, but we don't have results for this yet either. From an architecture point of view, this is a hybrid access network with difficult link characteristics.
V
O
I
A
Okay, and I guess we'll move on to Christian, please.
V
Not yet; this is the outcome of a research project, and we're starting with some field tests next. And we also have to do tests with multipath QUIC first, so I guess we're at a very, very early stage.
A
Okay, we don't have any more in the queue. So, thank you very much for the presentation, and I think we can move on and take some time back.
B
Yep, Olivier's next. Let's hope the audio is better than earlier. Can you try? Yes? Is it better? It's perfect, thank you. I'll run it from here, if you like.
S
So the use case is that there are hybrid access routers, hybrid CPEs, that have DSL and 4G connectivity, and these are deployed in the field. When, as a user, you have this kind of device, what you usually get is that you can use either DSL or 4G, and you use 4G as a backup, while the user would like to be able to use both DSL and 4G, especially in rural areas where you don't have enough connectivity, enough performance, from the DSL network. Next slide.
S
Slide, yes. So the solution allows end users to use regular TCP, single-path TCP, and there is a transparent proxy on the hybrid access router that proxies the connection, so that it becomes a Multipath TCP connection that goes over the DSL and the 4G network. It goes through a specific proxy, which is called the hybrid access gateway, in the operator's network, that does the conversion between MPTCP and TCP, so that it speaks TCP to the external servers. And that's similar, as an architecture, to the protocols that are used for ATSSS in 5G.
S
The main benefit is that it provides bandwidth aggregation, which is key for the low and medium bandwidth networks that exist in many countries, in many regions throughout the world. One advantage is that the congestion control allows to automatically adjust the bandwidth to the network capacity and sense the available capacity. The operators that have deployed this solution usually want to prioritize DSL over the LTE network, and so they do this using two different techniques.
S
The first one is the path manager, which is the algorithm that decides when to create a subflow over one specific path, and so they can delay the creation of the LTE subflow if the DSL link is not fully used. And then there is an algorithm which is called the packet-level scheduler, that operates at the packet transmission level and that prefers the DSL path over the LTE one. Next slide.
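The two mechanisms described here, a path manager that delays the LTE subflow and a packet scheduler that prefers DSL, can be sketched as two small functions. The 90% utilization threshold and all names are illustrative assumptions, not the deployed operator logic:

```python
# Minimal sketch of the two hybrid-access mechanisms described above, with
# made-up numbers: the path manager creates the LTE subflow only once the
# DSL link is nearly fully used, and the packet scheduler sends on DSL
# whenever its congestion window has room, falling back to LTE otherwise.

def should_create_lte_subflow(dsl_used_bps, dsl_capacity_bps, threshold=0.9):
    """Path manager: delay the LTE subflow until DSL is ~fully used."""
    return dsl_used_bps >= threshold * dsl_capacity_bps

def schedule(dsl_cwnd_free_bytes, lte_available):
    """Packet scheduler: prefer the DSL path over the LTE one."""
    if dsl_cwnd_free_bytes > 0:
        return "dsl"
    return "lte" if lte_available else None

assert not should_create_lte_subflow(10_000_000, 20_000_000)  # DSL half used
assert should_create_lte_subflow(19_000_000, 20_000_000)      # DSL saturated
assert schedule(1400, lte_available=True) == "dsl"
assert schedule(0, lte_available=True) == "lte"
```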
S
So now let's look at what MP QUIC could do in a hybrid access network and what it would bring, with the requirements. I think that there would be three benefits with multipath QUIC in hybrid access networks. First, by leveraging the datagram extension from QUIC, it's possible to provide bandwidth aggregation for non-TCP flows using MP QUIC.
S
The second is that MP QUIC could be an over-the-top solution that would aggregate bandwidth for any combination of different access links, even from different ISPs, going through an aggregation gateway that would run in the cloud. And the third advantage is that MP QUIC would be implemented in user space, which would not require kernel changes in the access routers. Next slide.
S
So what are the requirements for MP QUIC in this kind of network? I think the requirements are that you need to be able to learn the availability of the different paths and addresses, and on mobile devices these paths and addresses can change over time. It should be possible to start and stop using a path; it means that you need to have a path manager, which is the intelligence that decides when and how to create subflows.
S
It must provide aggregation, so you need to be able to simultaneously send packets over the two paths, or sometimes more paths than that. In some scenarios you want to prefer some paths over others, so you need a packet scheduler, and you need to be able to sense the performance of the different paths, by using PING frames, congestion control and other techniques, so that you can.
A
Thank you, Olivier. We have Matt Joras in the queue and nobody else. If you want to get in the queue, please do so now; I'll be cutting it shortly. Go ahead.
I
S
I
S
So then you could send the QUIC datagrams over the two paths.
L
Sorry, waiting for it to unmute there. David Schinazi, Google. Just to build on Matt's question, Olivier: when you send the QUIC packets over datagram frames on both paths, you won't have access to the QUIC packet numbers, so, because of the different latencies on the different paths, you could introduce a lot of reordering, which, as we know with TCP, causes problems. Have you implemented this and measured whether it causes any performance problems?
S
So we've done some experiments with that, and it works. And there is other work that looks at putting timestamps and other information in, to be able to reorder the QUIC packets, or other packets, at the proxy, at the server side, so that you can resequence if the reordering causes problems.
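The resequencing idea mentioned here can be sketched as a small egress-side buffer: the tunnel ingress tags each encapsulated packet with a sequence number (a timestamp would serve the same role), and the egress briefly holds out-of-order arrivals so the inner flow does not observe the reordering the two paths introduce. This is an illustration, not any specific proxy protocol:

```python
# Sketch of tunnel-level resequencing: release packets to the inner flow
# strictly in tag order, holding early arrivals from the faster path until
# the gap is filled by the slower one.

import heapq

class Resequencer:
    def __init__(self):
        self.next_seq = 0
        self.pending = []               # min-heap of (seq, packet)

    def receive(self, seq, packet):
        """Buffer an arrival; return packets now releasable in order."""
        heapq.heappush(self.pending, (seq, packet))
        out = []
        while self.pending and self.pending[0][0] == self.next_seq:
            out.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return out

r = Resequencer()
assert r.receive(1, "b") == []          # arrived early over the fast path
assert r.receive(0, "a") == ["a", "b"]  # gap filled, both released in order
```

A real implementation would also need a hold timeout, so a single lost tunnel packet does not stall the queue forever; that is omitted here for brevity.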
L
S
Yeah, it really depends on the implementation. We looked one year ago at QUIC implementations, and they did not cope well with reordering, while TCP implementations would cope much better with reordering, at least when looking at the recent Linux TCP implementation. And I guess some QUIC implementations have evolved now, and they deal better with reordering than what they did one or two years ago. So this is changing quickly.
B
N
So I just want to be sure and mention that this is not us being the ATSSS whisperer, and we're not wearing any 3GPP or IETF hats. We have some links here that people may find helpful, and I will go to the next slide, please.
N
So I really want to talk about ATSSS not so much as ATSSS, but as a multipath technology. ATSSS uses only two paths, one 3GPP, one non-3GPP; in their work they have rules that assign modes for flows. The vocabulary that they use, which I think is helpful, is: steering is selecting a path, switching is selecting a different path, and splitting is using multiple paths simultaneously.
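The steering/switching/splitting vocabulary can be read as three operations on the set of active paths. A toy model of that distinction, purely to fix the terms (the path names and functions are invented for illustration):

```python
# Toy model of the ATSSS vocabulary: steering selects a path, switching
# selects a different one, splitting uses multiple paths simultaneously.

def steer(paths):
    """Steering: select one path to carry the flow."""
    return {paths[0]}

def switch(active, paths):
    """Switching: move the flow to a different path."""
    others = [p for p in paths if p not in active]
    return {others[0]} if others else active

def split(paths):
    """Splitting: use multiple paths simultaneously."""
    return set(paths)

paths = ["3gpp", "non3gpp"]
active = steer(paths)
assert active == {"3gpp"}
assert switch(active, paths) == {"non3gpp"}
assert split(paths) == {"3gpp", "non3gpp"}
```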
N
So what I'm talking about in my presentation is the combination of things from ATSSS and enhanced ATSSS.
N
So ATSSS is basically two different protocols: one is using MPTCP, and one is using traffic aggregation without any specific protocol between the tunnel endpoints.
N
What they've been able to achieve so far for non-TCP traffic has been steering and switching only. What they're trying to do in enhanced ATSSS is splitting for non-TCP traffic, for any IP and Ethernet traffic, and they're also talking about support for additional ATSSS modes beyond what's in ATSSS initially. Next slide, please.
N
So this is basically just to say: this is the tunneling process, slash proxying. They have proposals to do both of those from the combination of what's in ATSSS and eATSSS, but I wanted to just say, basically, these are the endpoints between the user equipment and the tunnel endpoint, before you get to servers. When we say MP QUIC, I'm really talking about multipath QUIC.
N
This picture is out of a 3GPP document, so we didn't change the labels, but don't pay a lot of attention to that detail. And next slide, please.
N
Oh, just a minute. So, just to mention the campus enterprise use case: this is very much similar to the hybrid access use case that Olivier presented very capably, just with some additional details on benefits for the user and for the access provider.
N
Next slide, please. So the ATSSS modes: these are already deployed. Active-standby, smallest delay based on RTT, and load balancing.
N
Some of these can work pretty well, I think, with connection migration in QUIC version 1. Some of them do require some multipath capability, and priority is really interesting, because you're basically moving from one path to another based on congestion being encountered. But the way it works in the priority-based mode
N
is that you're still only using one path, so if you wanted to use both paths after you encounter congestion, you need a multipath capability also. Next slide, please.
N
These include things that are under discussion, like changing the access split weights dynamically. Basically, what's in ATSSS only changes the split if you have a path that goes down, so this is being able to change access splitting dynamically; forwarding on both accesses when necessary, to provide redundancy; and, this one I really kind of dig, forwarding on both accesses if the RTT difference between those two paths is below a threshold, so that basically they become functionally equivalent to each other, but if they're not, then you go on the one with the shortest RTT. And, kind of a catch-all, the UE making decisions about which access to use on its own, based on lots of things.
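The RTT-difference rule highlighted here can be sketched in a few lines: split across both accesses when their RTTs are within a threshold, otherwise steer to the shortest-RTT one. The 10 ms threshold is an invented example value, not a 3GPP number:

```python
# Sketch of the eATSSS rule described above: forward on both accesses when
# their RTTs are close enough that the paths are functionally equivalent,
# otherwise use only the shortest-RTT access.

def choose_accesses(rtts_ms, threshold_ms=10.0):
    """rtts_ms maps access name -> measured RTT in milliseconds."""
    best = min(rtts_ms, key=rtts_ms.get)
    if max(rtts_ms.values()) - min(rtts_ms.values()) <= threshold_ms:
        return sorted(rtts_ms)          # split across both accesses
    return [best]                       # steer to the shortest RTT

assert choose_accesses({"3gpp": 20.0, "non3gpp": 25.0}) == ["3gpp", "non3gpp"]
assert choose_accesses({"3gpp": 20.0, "non3gpp": 80.0}) == ["3gpp"]
```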
N
None of these, I think, can be supported using only migration as defined in QUIC version 1. So my point in showing that slide is basically to say that where we seem to be headed is a lot more dynamic use of multiple paths. Next slide, please. So, like I said, what we're trying to do is support traffic splitting across multiple accesses, for any IP or Ethernet traffic, with in-order delivery.
N
We really hope we can do that with multipath QUIC, because it's building on the synergies of the QUIC stack. We know that the UEs are already going to have a QUIC stack, so if we don't do something with QUIC, we're going to be doing it some other way. And, like I said, what we're really shooting for is simultaneous use of multiple paths, with in-order delivery within a stream split over multiple paths. I think that's my last slide. Questions?
W
Martin here. Hi Spencer, thanks for this. Maybe I'm not following, but I think what you're showing us is customer QUIC packets encapsulated inside datagram frames of an outer QUIC connection. Is that accurate?
N
I think that's going to be true. I should mention that the people that are looking at this right now are, I think, at the architecture level, and that's going to be the stage 2 level; that's going to be something that the stage 3 guys are actually going to be discussing in more detail. So.
N
So if you back up to maybe the third slide, yeah: basically, the picture that's here is what's deployed now, which is, like I said, one of the things is Multipath TCP, and the other is a low layer solution, ATSSS-LL, or something like that. So I think they're trying to basically reason from what they're working with now to where they should be going.
N
I think it's fair for me to say that this work is probably earlier in the process than what you're going to get from most SDOs coming into the IETF, but 3GPP really wanted to work closely with the IETF, so that they weren't trying to duplicate things that the IETF was going to be doing. So, you know, what you just asked is a very interesting question. Thank you.
F
So what you described is actually ATSSS lower layer, so there's no additional protocol, and this ATSSS lower-layer mode is only supporting steering and switching. What they don't want to do is really use both of the paths simultaneously, because that would introduce a lot of reordering, and that can have negative effects, especially as you want to realize this for all kinds of IP and Ethernet traffic, so you don't know what the traffic above is. So they try to avoid reordering; that's like the main point here.
X
Oh hello, good evening. Richard Bradbury, BBC, here. I just want to know what the anticipated timescale for this ATSSS work is. Is it a release 17 3GPP item? Do they want to get it done and dusted in the next 18 months? That is, that is the current plan of record.
N
F
Okay, sorry, I should have said my name: Mirja Kühlewind. Let me jump in again. So this is under discussion currently; there are different solutions proposed for release 17. Due to the COVID crisis, there's also discussion about extending the deadlines for release 17 by six months. And so I think what they need is really some kind of stable or adopted working group draft.
F
Yeah, there are many specifications in 3GPP where there's not an RFC yet. There is a lot of discussion in 3GPP right now about how mature this is, and it's definitely not mature enough, as we don't know that the QUIC working group will work on it. But as soon as there's a working group document, there could be a level where they could just move on.
N
Okay, cool, thanks.
L
Hi, thank you, it's me again: David Schinazi, Google. So, Spencer, thanks a lot for the presentation, especially the terminology; that helped a lot, because it's very different from what I'm personally used to, so that was great. You described benefits of this project in terms of the network, like: I can use both paths, and I think that's cool. Sorry, I have a police car outside. But what I'm kind of having a hard time understanding is: what is the benefit to the user, in your mind?
N
So I think the benefit... this is the provider view of what we're talking about, and I think many of the other use case presentations have been about the use cases that the providers are expecting to have running over their networks.
L
Well, I'm not sure I understood; let me rephrase what you just said, to make sure I understood it. It sounds like you're.
N
L
N
Yes, I think mostly what I'm saying is that those discussions happened in 3GPP a long time ago.
L
Okay, and we don't know the outcome of those conversations.
F
Putting you on the spot, yeah. Let me jump in for a second again. So I think, in the setup you have here, the benefit you get is that you actually have a direct interface between the UE and the network, as well as the proxy endpoint, which is directly in the network and has knowledge about what's going on in the network, so you can actually provide some guidance to the UE, just to give more information about which path to use.
L
F
If the server provides multipath support, you could as well do this end to end, but then you don't get any information from the network about what the network conditions currently are, and which path is the better one for you to use right now, in your current situation. And that's the benefit you get here.
L
Okay, I'm sorry, maybe I'm doing a very poor job of explaining myself. As an end user: I'm not talking about me as someone who works in networking standards; I'm talking about an end user who wants to visit a web page, watch a video, live stream, whatever. What is the benefit to the end user, to someone that is non-technical?
F
So I think you're asking a question about how this is, for example, put in the tariffs. And there is, for example, the idea that if you have ATSSS, and the network can tell you to offload your traffic to a managed network, to an operator network, as soon as you can, then you get, like, unlimited data, or whatever might be a subscription that you want to choose, and that requires this kind of function support in the network. But that's really not the technical question here.
L
B
Sorry, let me step in. So, the benefits... we've got to move on; this is a general discussion, and we're already eating half of our general discussion time. And Mike wins the prize for the last agenda bash, which happened, and he promised he can do this. So there's one more use case that wasn't on the agenda when we started, which is here now. He has one slide and one minute, and there's not going to be any questions, Mike, so you'd better nail this one.
J
Okay, I'll do my best. So the very short version is that we have been talking about a scenario where clients make requests; we have lots of different instances that may or may not be in the same location, and the instance that gets the client's connection might not have all the resources that the client's requesting. And so, right now,
J
that instance has to go fetch those resources from other nodes that do have a copy, and it would be nice if it were possible for those other nodes to send the content directly back to the client, which basically means a distributed server instance. That's something that we can do with QUIC right now, potentially; you would have to have some coordination between the instances, give each a range of packet numbers it can use, but it produces a lot of apparent reordering, because they aren't able to coordinate sending packet numbers in the right sequence.
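The coordination Mike describes, handing each sending instance a disjoint range of packet numbers, can be sketched as a tiny allocator. The chunk size and class are illustrative assumptions, not any deployed system:

```python
# Sketch of the distributed-server coordination described above: the
# instance owning the connection grants each helper node a disjoint range
# of packet numbers, so two senders never collide, at the cost of apparent
# reordering gaps as seen by the client.

class PacketNumberAllocator:
    def __init__(self, chunk=1000):
        self.chunk = chunk      # arbitrary illustrative range size
        self.next_start = 0

    def grant(self):
        """Give a sender its own disjoint packet-number range."""
        start = self.next_start
        self.next_start += self.chunk
        return range(start, start + self.chunk)

alloc = PacketNumberAllocator()
primary = alloc.grant()
helper = alloc.grant()
assert set(primary).isdisjoint(helper)   # no collisions between senders
assert helper.start == 1000              # but the gap looks like reordering
```

Separate per-path packet number spaces, as in the multipath proposals, would remove the need for this kind of range carving, which is the point made in the next turn.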
J
If you have separate packet number spaces and multipath, you could probably make this look a lot cleaner, but it would be nice if certain things about the path can happen on each path, and the client would still see it as a distributed server with multiple IP addresses. And I'm slightly over my minute, so there you go.
B
So, Mike, one disclosure: as a third party here, I believe that Nokia holds IPR on this idea, at least for Multipath TCP, and I know this because it's my patent.
B
Yeah, so, sorry, it's been a long day. So the purpose of the meeting was to figure out what these use cases are in a bit more detail, and what requirements they would have for functionality that's maybe missing in QUIC version 1. And I think this was helpful; there was a lot to chew on, and it was a little bit all over the place. But I would specifically maybe use this remaining 20 minutes to see what use cases, if any (I think we've seen this in the beginning), could maybe already be supported with what version 1 offers, specifically migration and migration to a preferred address. And then a second part of the discussion would be whether there are other pieces of functionality that we might want to consider adding to version 1 in some form, that would maybe support many, hopefully, of these other use cases. And I'm guessing there's a queue building that Lucas is going to run again; please keep it short, we only have 20 minutes left.
W
Yes, speaking as an individual: Martin Duke, F5. So I think migration is very, very close to solving many of these use cases. But one thing that I'm seeing is that there's a lot of hot switching back and forth between the ideal paths, and the current migration assumption is that once you migrate off a path, you can more or less throw it away. So that's what drives this. So I think there's only a small amount of protocol you need to resolve that.
W
I
think
christian's
draft
actually
has
a
lot
of
the
pieces
you
need.
The
other
thing
I'd
say
is
that
we
saw
a
lot
of
different
use
cases
and
those
use
cases
implied
a
whole
bunch
of
different
schedulers.
D
I think that Roberto is right: it'd be better to have several connections that can be coordinated with a lightweight connection management system, so that, hey, I do that connection, and then I do another connection to exactly the same server, and then the application sees those connections and does its scaling of them however it sees fit. And that seems like a much better plan than trying to build a ton of complexity inside the QUIC stack.
Q
Yeah, thanks. I was curious to hear a little bit more from quicly, since they do something very similar to this with the direct server return stuff.
Q
But I tend to think that there are two states of the world. One in which only one server is sending, like a video playback where the video is all on one server, in which case you don't need to deal with the reordering problem in a special way, things largely just work, the existing solution is pretty sensible, and you can just do DSR. And then the other state of the world, where the ACK frequency draft solves your problem by just allowing you to control acknowledgements and ignore reordering entirely, and I think that's probably what quicly is doing.
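The effect of tuning reordering tolerance can be modeled in a few lines. This is a toy sender-side model, not the actual frame mechanics of the ACK frequency draft; `spuriously_lost` is a deliberately simplified packet-threshold loss rule (loosely after RFC 9002's kPacketThreshold), and the arrival pattern assumes the disjoint packet-number blocks discussed earlier.

```python
def spuriously_lost(arrival_order, threshold):
    """Simplified packet-threshold loss rule: a packet counts as lost if a
    packet numbered `threshold` or more above it arrives before it does."""
    lost = set()
    seen = set()
    for pn in arrival_order:
        seen.add(pn)
        # Any not-yet-arrived packet far enough below pn is declared lost.
        for earlier in (p for p in arrival_order if p not in seen):
            if pn >= earlier + threshold:
                lost.add(earlier)
    return lost


# Two server instances interleaving disjoint packet-number blocks:
arrivals = [0, 1024, 1, 1025, 2, 1026]

# With a small threshold, packets 1 and 2 get declared lost spuriously:
assert spuriously_lost(arrivals, threshold=3) == {1, 2}
# With a sufficiently raised threshold, the reordering is tolerated:
assert spuriously_lost(arrivals, threshold=2000) == set()
```

Raising the tolerated reordering is exactly what lets a receiver "ignore reordering entirely" in the multi-sender case.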
Z
I was just typing up my response. So, I think this is an interesting discussion; there's a bunch of things here that are interesting. Christian pointed something out that I think I was about to say as well.
Z
It's that there are interesting interactions between the application and the transport, and QUIC perhaps is a little bit less susceptible to the sorts of opacity problems that result from embedding these sorts of capabilities deep in the network or deep in the OS, where you don't have a lot of visibility into what's going on. But I want to up-level a little bit.
Z
The question that I would prefer to ask is what the QUIC working group should be working on next, rather than whether we should be doing multipath, because I think I've been convinced, at least from this, that there is some value, in some cases, in having multipath at some point in the future.
Z
The problem, though, is that I don't think we'll get the sort of synergies (to use the point that Spencer brought up) with the deployed protocol if we allow the protocol to fork, and I don't see the people who are deploying QUIC actively right now being interested in doing some multipath.
A
Thank you, Martin. Next, Eric.
AA
That was new for me. So I just wanted to say thank you to the folks who were presenting. The last thing here, specifically to what MT was just saying: I think Roberto's point about applications needing to be able to express their scheduling needs is really, really good. And so either applications need to be able to tell the system, "hey, here are my scheduling needs", and the system can deal with the policy, or the system needs to be able to tell the application, "this is what the policy is; you know your scheduling needs".
AA
Please
do
the
right
thing,
but
it
seems
like
a
lot
of
those
questions.
Aren't
let's
get
the
quick
working
group
to
do
a
bunch
of
protocol
design
to
make
stuff
work?
A
lot
of
those
questions
are:
how
do
we
deal
with
policy
and
platform
design?
Not
how
do
we
build
a
protocol
that
we
can
actually
deploy.
F
Yeah, hi, this is Mirja Kühlewind. Yeah, I wanted to just react to Roberto's point, because he was just stating this and then it had gone by before we had a chance to comment, and I really disagree with his point of view. I don't think it's an accident to have this in the transport layer; having multipath control in the transport layer enables a lot of optimizations that you can't do in the application.
F
You can retransmit packets that got lost on one path on the other path, and so on. So that's where the benefit comes from. I agree that, for scheduling, you need some kind of input from the application, but what we've seen so far (I mean, there's a lot of room for research into finding new fancy schedulers), what we've seen so far is that you usually have a handful of schedulers which are optimized for specific scenarios, and the application can just choose between those schedulers. It's a very simple interface that we need, but that's it. So I don't think that's the challenge here, but the benefits from integrating this into the transport layer are much higher. And I think this is a general feature that especially smaller applications, which don't put all the effort into developing their own stack and so on, can benefit from in future, if it's just there as a general feature.
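The "handful of schedulers the application picks from" interface described here could look something like the following. This is a minimal sketch; the `SchedulerPolicy` names and the `MultipathConnection` API are hypothetical, invented for illustration rather than taken from any implementation.

```python
from enum import Enum


class SchedulerPolicy(Enum):
    """A handful of pre-built schedulers optimized for specific scenarios."""
    LOWEST_LATENCY = "lowest_latency"      # e.g. interactive traffic
    HIGHEST_THROUGHPUT = "throughput"      # e.g. bulk download
    REDUNDANT = "redundant"                # duplicate frames on all paths


class MultipathConnection:
    """Toy stand-in for a multipath-capable transport connection."""

    def __init__(self):
        self.policy = SchedulerPolicy.LOWEST_LATENCY  # a default

    def set_scheduler(self, policy: SchedulerPolicy):
        # The entire application-facing surface is this one knob; the
        # transport keeps all the per-path mechanics internal.
        self.policy = policy


conn = MultipathConnection()
conn.set_scheduler(SchedulerPolicy.HIGHEST_THROUGHPUT)
assert conn.policy is SchedulerPolicy.HIGHEST_THROUGHPUT
```

The design point being made is that the application supplies a coarse policy choice, not a full scheduler implementation, which keeps the API thin while leaving the per-path decisions in the transport.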
T
I want to share more of our experience with multipath. We definitely agree that we should keep things simple. And on the other side, I think, based on our experience, the current QUIC, if users or a customer truly want to use multipath, still needs some additional capability.
T
So I think the best way is to review all the proposals and to see if there is a simple way for us to enable this capability. And I definitely agree that a lot of the policy, especially for the scheduler, needs to be decided by the application. And also one comment on the money side: we do see one issue coming from multipath QUIC, which is that for some customers the traffic charges for cellular and Wi-Fi are different. And so I think one way, at least for us, is to collaborate with some mobile carriers, so that we know that when the customers are using our app, they can get free traffic on both Wi-Fi and LTE. And so that's how we tackle this problem.
A
Thank you. Next up, Jana. Just to remind people, we have nine minutes left of the session, and I would guess Lars wants some time to wrap up at the end, so please, please keep it short. Oh good, so I've got eight minutes.
P
All right, just a few quick points. One on the policy and scheduling thing: I think Eric said it really well. The use of multiple paths is fundamentally something that is tied very closely to the network and to the application. There's just no two ways about it: you either make the API really thin between the application and transport, or you make the application decide. That's a question of where you draw the line of the API and what you really expose through the API, but fundamentally multipath is tied deeply to its use case.
P
This is something that I was really hoping people would bring out more during the presentations, but I feel like it didn't come through very much. There were some presentations which did make it more explicit, but the application is super critical in terms of figuring out exactly what multipath even means. Another way of saying this is that, with the charter as we have it right now, QUIC has effectively already done multipath: connection migration is multipath. For people who don't believe that, I'm happy to have a conversation separately.
P
Use of multiple paths is what multipath is, and so arguably connection migration is already doing that. So the question for me, still, in a number of cases, remains: what experience do we have with connection migration? How can we tweak it to make things work better? This brings me back to the point of priorities, and our lack of experience, and what Martin said right at the beginning, up-leveling this whole conversation again.
P
How much does that give us, how much do we need to add to it, and how do we increase it? I actually have use cases where, right now, I haven't really figured out whether the signaling in multipath is going to be enough for me, and that's something I'd like to see happen. But I'm waiting to get QUIC deployed with connection migration, so that I can figure out exactly what more pieces I need.
P
So I would say that we really should have the discussion around priorities, of what we want to get. And I really want to see version negotiation happen; that's the first thing I want to see happen next. As a working group, we can keep bringing on more things, but we need to prioritize. And my other point is about just lack of experience.
A
Thank you. David, please.
L
I'll keep it short. So what I'm getting from this meeting is that we have two main use cases for multipath. One is multipath to a proxy, so ATSSS, and one is multipath end-to-end. So I just want to make a point about each. On the first one: from the discussion earlier, it sounds like there's no clear benefit to end users, and I'm going to quote RFC 8890, which came out earlier this year: the Internet is for end users. So that's really what we should focus on.
L
So
I
really
don't
think
that
quick
is
a
good
fit
for
atss.
If
the
3gpp
wants
to
work
on
this,
that's
good,
but
if
they're
asking
us
to
remove
encryption
and
to
add
complexity,
I'm
not
sure
why
they're
using
quick
in
the
first
place,
so
I
would
suggest
perhaps
using
something
different.
Maybe
you
know
gre
or
some
other
encapsulation
between
ip
and
ip
and
I'm
seeing
plus
ones
in
the
chat.
Thank
you
folks.
Now,
on
the
second
point,
end-to-end
multipath,
this
one,
I
think,
might
have
value
to
the
end
user.
L
The
question
is,
I'm
not
going
to
repeat
it.
I'm
just
going
to
agree
strongly
with
janna.
We
need
to
prove
that
this
is
better
than
connection
migration.
All
of
the
data
I've
seen
so
far
doesn't
take
into
account
connection
migration
and
then,
until
we
do
that
I'd
say
we
need
to
prioritize
other
work
in
the
quick
working
group,
such
as
the
extensions
that
we
have
already
adopted
and
probably
other
things,
that's
it.
Thank
you.
AB
So I just wanted to make a small comment regarding scheduling policies. Well, I agree with Mirja that the transport can provide a generic mechanism from which the application can use some kind of policy, or design a policy.
N
Sorry, so I just real quickly wanted to say: experimental was good enough for what I was hoping for from multipath QUIC. Is experimental not good enough for anyone? Is there anyone for whom experimental is not good enough?
S
Yes, so I just want to answer Martin and Roberto about where to put multipath: is it in layer 3, or is it above, in the application layer? I think multipath belongs in layer 4, because this is where we have congestion control information, and this is where we have retransmission capabilities, and both work together.
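The layer-4 benefit that both this comment and Mirja's point describe, retransmitting data lost on one path over another, can be illustrated with a toy model. This is not any real stack's API; the `Path` and `MultipathSender` names and the loss callback are hypothetical, chosen only to make the idea concrete.

```python
class Path:
    """One network path (e.g. Wi-Fi or LTE) with a record of what it sent."""

    def __init__(self, name):
        self.name = name
        self.sent = []  # packets handed to this path

    def send(self, packet):
        self.sent.append(packet)


class MultipathSender:
    """Retransmits data lost on one path over an alternative path.

    Only a layer that sees loss detection AND all paths together can
    make this decision, which is the argument for putting it in layer 4."""

    def __init__(self, paths):
        self.paths = paths

    def on_loss(self, packet, lost_on):
        # Pick any other available path; fall back to the same path if
        # it is the only one.
        alternatives = [p for p in self.paths if p is not lost_on]
        target = alternatives[0] if alternatives else lost_on
        target.send(packet)
        return target


wifi, lte = Path("wifi"), Path("lte")
sender = MultipathSender([wifi, lte])
wifi.send("STREAM frame #7")              # original transmission on Wi-Fi
used = sender.on_loss("STREAM frame #7", lost_on=wifi)
assert used is lte                        # retransmitted on the other path
```

An application-layer scheme splitting traffic over separate connections cannot do this per-frame, because each connection's loss detection only sees its own path.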
T
So I want to give one comment on connection migration. One thing we also figured out: if you compare connection migration with LTE, can connection migration in QUIC do better than LTE handover? Because in LTE, between the base stations, there is an actual interface for exchanging the context. And so, given that, for some extreme mobility use cases, I think multipath can be better than even the LTE handover. So that's one of the comments I want to make to everyone.
B
Nothing? Then, first of all, thanks to Robin, who's been taking our minutes; that was a heroic effort. And Eric, thank you both. It seems clear that we need to talk more. I encourage you to use the mailing list for this, rather than GitHub issues or something like that; it's a better vessel. We will monitor the discussion, and we'll probably have a follow-up discussion on the list. It's always possible to have another one of these, maybe with more time for discussion than we had.
B
I also encourage people who presented use cases and saw similarities between them to maybe, you know, talk amongst yourselves and figure out whether you can consolidate your use cases, or come up with some, you know, joint proposals for what could be taken forward. But I think this is a hard problem, right, because (it's too late in the day for me to phrase this well) it's unclear what functionality is missing at the moment.
B
It's
unclear
whether
the
stacks
that
are
at
the
moment,
getting
very
large
deployments,
are
even
interested
in
deploying
this.
It's
it's
some
some
scheduling
questions
are
unclear.
You
know
what
information
the
application
needs
to
have.
Whatever
information,
the
quick
stack
needs
to
have
and
so
on.
So
this
is
a
hard
problem
and
we
did
multiple
tcp
right.
It
was
like
a
three
year:
eu
funded
research
project
and
then
a
whole
bunch
of
standardization
at
the
end
of
it
and
then
a
whole
bunch
of
implementation
work.
B
We
can
leverage
that,
to
some
degree,
but
but
quick
also
brings
new
new
challenges,
and
so
we
need
to
carefully
think
what
we
want
to
do.
I
hope
we
can
find
something
that
is
small
and
that
will
enable
experimentation
and
that
might
already
support
some
of
the
use
cases
directly
that
we
can
start
with
and
then
continue
to
to
like
move
forward.
If,
if
this
gets
traction,
I
I
I'm
hesitant
to
sort
of
see
us
doing,
you
know
a
full
multipath
architecture
in
like
one
go.
That
would
be
a
pretty
heavy
lift.
B
I
think,
with
that,
it's
one
minute
passed
and
webex
hasn't
cut
us
off,
so
I
must
have
configured
it
correctly.
To
not
do
that,
which
still
means
we're
done,
use
the
mailing
list.
You
know
talk
to
each
other
talk
to
lucas
and
me
see
if
we
need
more
discussion
time
like
this
and
we'll
see.
If
we
have
some
agenda
time
in
well,
I
would
say
in
bangkok,
but
I
guess
at
itf
109..
Thank
you
all
good
evening,
good
day,
good
night
good
morning.