From YouTube: IETF100-QUIC-20171115-0930
Description
QUIC meeting session at IETF100
2017/11/15 0930
https://datatracker.ietf.org/meeting/100/proceedings/
C: We had a problem with the etherpad yesterday: once it got over about 20 people in the etherpad, it got pretty melty. So if you're not helping, don't go on the etherpad. Thank you.
B: Okay, well, welcome to... we're out of sync. Okay, let's fix that. Welcome to QUIC. This is our second session. This is the Note Well statement, as we've said many times before. These are the intellectual property terms under which you participate. So if you're not familiar with this, please go and have a look at "IETF Note Well" on your most convenient internet search engine. It's important to understand this. Can everyone in the back hear me? Is this a good volume? I need a positive. Okay, good. Thank you.
B: Likewise, just a reminder, as we've done a few times recently: we expect this to be a professional environment. If you have issues that you can't handle by talking to the person you're concerned about, or by talking to your chairs, we also have an Ombudsteam (I said it right) who can handle issues, especially regarding harassment; those are the folks at the bottom of the slide. So, today: I will send the blue sheets out in a second, and we have a scribe. Thank you, Brian Trammell.
B: This is actually a little out of date. We're going to take the item that we postponed yesterday, Martin giving a summary of the changes by the editors, and put that at the top of the agenda. Then Ian is going to talk about loss recovery, John is going to talk about connection migration, and we're going to talk about the third implementation draft and what it might contain. And finally, if we have time, we'll discuss any open issues. Any other items, any agenda bashing? Okay, great, let's start. So, Martin, you're up.
E: Okay, this should be history for people. This is what people were concentrating on during the hackathon, which was draft 7. We didn't do a lot in that draft; we just made a few tweaks. We use AES-GCM to protect the handshake packets. We got rid of the 1-RTT long-form headers from our packet formats, and there's a bunch of changes that don't quite complete the discussion about closing, but got most of the way there, and I've got a little bit more on that.
E: The idea here is that there are three basic modes of operation. One is an idle timeout that both sides advertise, and when that timer runs out... it's actually a negotiation: I say X, someone else says Y, and the minimum of those two is the time that the connection can remain idle before it starts timing out. There's also the immediate close, which we decided would be used for the graceful shutdown case: the application is done with the connection, or it detects an error, and it sends a message.
E: We have two of those now; look at the draft for the details. And then there's a stateless reset. A lot of the discussion sort of focused on this, the states during these transitions. We have essentially two states now, and there's a pull request that needs a little bit more work before it goes in, but essentially we have the one timer, and then you transition between two states in that time. The first...
E: The first state is only really triggered in the case where you send one of these close messages, and that's the closing state (better names welcome, people, but it doesn't really matter). In that state, you don't send any packets except for a close message in response to receiving packets from the other side. So in case your close message was lost, you provide more of them, just to make sure they get through. The second state is the draining period, which is essentially: you keep your state around for a little bit of time to make sure that...
E
No
incoming
packets
for
that
connection,
reordered,
packets,
those
sorts
of
things
make
sure
they
don't
get
treated
as
if
they
were
new
connections
or
anything
like
that.
You
can
associate
them
correctly
with
the
the
connection
state
and
throw
them
away
properly
without
triggering
any
sort
of
other
machinery.
There's
a
bit
of
discussion
about
that
and
I
think
Christian
raised
a
few
points
of
that.
Well,
what
state
do
you
actually
need
to
maintain
in
each
one
of
these
things,
because
there's
not
a
lot
of
information
you
need
to
maintain
in
order
to
make
these
decisions
correctly.
E: Both sides go into this draining state, and the reason both sides go into the draining state is that their individual views of when the period starts are skewed, based on the time that packets take to deliver. Maybe there were spurious retransmissions or something like that; those cause the two sides to have some slight disagreement. The current draft says three RTOs; that's also up for negotiation, and there's some discussion in the doc about how you might shortcut that in certain circumstances. Next: the immediate close is more interesting.
E: I've shown one of the more complicated scenarios here. A detects some sort of error and sends a connection close. That connection close could be lost, and I've shown that disappearing into the ether. In this case, B continues on, happily ignorant of the fact that A has generated some sort of error, and sends packets to A. A, because it's in this closing state (I've got an error on the slide, but) it sees one of those packets coming in and sends a connection close out. So this is essentially loss recovery for the close.
E
The
amount
of
state
that
the
that
a
is
maintaining
in
this
case
is
pretty
minimal.
It
can
actually
just
save
the
packet
that
received
and
there's
some
question
about
whether
it
even
maintains
decryption
keys
for
the
for
the
incoming
packets.
But
that's
something
we're
going
to
sort
out.
B
can
send
a
connection
close
to
sort
of
shortcut
this
whole
process.
We
had
a
bit
of
discussion
with
with
implementers
about
this
one
and
they
were
concerned
that
in
these
shutdown
scenarios
there
wasn't
any
way
to
sort
of
guarantee
termination
in
a
reasonable
period
of
time.
E
So,
though,
I
had
having
test
cases
that
sort
of
in
this
sort
of
trailing
state
and
if
you're
operating
test
cases
in
a
in
a
sort
of
lossless
environment
being
able
to
send
a
closed
in
response
to
a
closed
is
actually
quite
nice,
because
then
both
sides
enter
into
a
state
whether
they're
essentially
done
and
there's
no
more
packets
needing
to
be
exchanged.
So
we
allow
that
to
happen
once,
but
we
don't
allow
that
to
be
repaired.
Otherwise
we
get
into
a
ping
pong
situation.
There's
a
bit
of
debate
about
this
particular
point
as
well.
E: Christian suggested that we could use a particular error code for this situation and allow B to continue sending, but I don't actually think that's necessary; we haven't resolved that particular part of the conversation. The error on the slide is that once A has received a positive indication that B is closing, it doesn't need to send connection close anymore, and so it enters the draining state also, and both of those just quietly go away at that point.
E
Next,
that's
actually
in
the
text
there,
but
never
mind.
Now.
Stateless
reset
is
really
quite
simple.
The
server
doesn't
maintain
state,
so
if
it
receives
a
packet
that
it
doesn't
know
about,
it
can
send
a
stateless
reset
and
the
client
then
enters
a
draining
period.
Just
in
case
this
was
a
reordered
packet
or
something
along
those
lines.
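The closing and draining behavior described above can be sketched roughly as follows. This is a minimal illustration, not text from the draft: the class, method names, and the message constants are mine, and the three-RTO draining period is the value the speaker quotes from the current draft.

```python
import time

# Illustrative sketch of the immediate-close / draining behavior described
# above. Names and constants are hypothetical, not from the draft itself.
DRAINING_RTOS = 3  # current draft: keep state around for three RTOs

class Connection:
    def __init__(self, rto=0.2):
        self.state = "open"
        self.rto = rto
        self.deadline = None
        self.saved_close = None  # the one CONNECTION_CLOSE packet we keep

    def immediate_close(self, send):
        # Application is done or detected an error: send close, enter closing.
        self.saved_close = b"CONNECTION_CLOSE"
        send(self.saved_close)
        self.state = "closing"
        self.deadline = time.monotonic() + DRAINING_RTOS * self.rto

    def on_packet(self, pkt, send):
        if self.state == "closing":
            if pkt == b"CONNECTION_CLOSE":
                # Positive indication the peer is closing: stop responding.
                self.state = "draining"
            else:
                # Our close may have been lost; repeat it in response to
                # incoming packets (loss recovery for the close).
                send(self.saved_close)
        elif self.state == "draining":
            # Associate and discard quietly; never treat as a new connection.
            pass

    def timer_expired(self):
        return self.deadline is not None and time.monotonic() >= self.deadline
```

Once the timer runs out, both sides just quietly discard the remaining state.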
E: When we looked into this, we discovered that stream 0, which we use for special purposes, was being allocated, I think, as unidirectional server-initiated or something like that in the first iteration of this, and we managed to get it to bidirectional server-initiated, but then that didn't make a whole lot of sense, because clients initiate this stream. So we flipped the bit, and now it makes a little bit more sense. So people who actually implement this using standard stream machinery will be able to say, well...
E
This
is
just
a
bi-directional
stream
that
the
client
initiated
and
they
can
use
all
the
standard
stream
machinery
without
special
logic
other
than
the
other
than
the
necessary
special
logic
in
order
to
handle.
This
consequence
of
that
is
now
that
we've
flipped
the
bit
the
lowest
bit.
So
anyone
who's
used
to
saying
that
clients
initiate
requests
on
stream
3,
for
instance,
will
now
find
that
it's
stream
for
instead
and
it's
4
8
12
16,
because
we
now
have
more
bits
consumed.
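The numbering just described can be illustrated with a small sketch. This assumes the low two bits of a stream ID encode initiator and directionality as discussed above; the function names are mine.

```python
# Hedged sketch of the stream-ID numbering described above: the two low
# bits of a stream ID encode who initiated it and whether it is
# bidirectional, so client-initiated bidirectional streams are 0, 4, 8, ...
def stream_type(stream_id: int) -> str:
    initiator = "client" if stream_id & 0x1 == 0 else "server"
    direction = "bidirectional" if stream_id & 0x2 == 0 else "unidirectional"
    return f"{initiator}-initiated {direction}"

def nth_client_bidi(n: int) -> int:
    # n = 0 gives stream 0, used for special purposes; after the flip,
    # requests land on streams 4, 8, 12, 16, ...
    return n * 4
```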
E
There's
new
state
machines
coming
I
think
we're
gonna
go
through
in
a
little
bit
more
detail
about
how
the
individual
sides
of
a
stream
transition
through
states
so
we'll
have
a
sentence,
ID
state
and
a
receive
state
side
state,
and
it
will
be
up
to
implementations
to
synthesize
a
state
if
they
want
across
a
bi-directional
stream,
if
that's
at
all
interesting
to
them.
But
I,
don't
think
we're.
Gonna
spend
a
whole
lot
of
effort
on
that.
E
Previously
we
had
a
mix
of
different
things
for
all
of
our
different
numbers
in
the
protocol
we
had
8
bits
and
16
bits
and
32,000
64's
and,
and
then
we
had
these
weird
things
where
they
had
flags
over
here
and
it
was
either
8
16
12,
because
there's
many
many
different
ways
in
which
we
could
represent
a
number,
especially
especially
like
the
the
bespoke
16
bit
sort
of
floating-point
format
that
we
had.
That
was
that
was
great.
It
fit
the
purpose,
but
just
another
format
that
we
had
to
deal
with.
E
So
now
we
just
have
one:
it's
a
variable
size
encoding.
We
take
the
first
two
bits
and
that
encodes
a
size,
8
16,
32
64
and
the
remainder
of
her
to
be
beginning.
Your
number
leads
to
some
interesting
maximums
for
these,
but
it's
it
actually
works
out
pretty
well.
We
decided
to
also
move
this
to
the
HTTP
mapping,
conveniently
it
has
the
property
of
allowing
us
to
remove
continuations
for
header
box,
which
was
something
that
people
really
really
hated
in
HTTP
2,
and
that
is
no
more.
E
It
also
means
that
most
cases,
the
the
frame
size
will
actually
be
smaller.
Http
2
frames
have
a
three
bike,
lengths
I
believe
and
now,
for
the
most
part,
we'll
have
ones
and
two
by
plates,
which
is
which
is
quite
nice,
so
I
did
not
check
those
numbers,
particularly
that's
just
examples
of
things
that
that
he
did
yes,
6
14,
30
and
62
than
other
new
numbers.
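The variable-size encoding just described can be sketched as follows: the top two bits of the first byte select a 1-, 2-, 4-, or 8-byte encoding, leaving 6, 14, 30, or 62 usable bits. The function names are mine; this is an illustration of the scheme as described in the talk, not draft text.

```python
# Hedged sketch of the variable-length integer encoding described above.
def encode_varint(v: int) -> bytes:
    # Pick the smallest of the four encodings that fits the value.
    for prefix, length in ((0b00, 1), (0b01, 2), (0b10, 4), (0b11, 8)):
        if v < 1 << (8 * length - 2):
            data = v.to_bytes(length, "big")
            # Fold the two length bits into the top of the first byte.
            return bytes([data[0] | (prefix << 6)]) + data[1:]
    raise ValueError("value exceeds 62 bits")

def decode_varint(data: bytes) -> int:
    length = 1 << (data[0] >> 6)   # top two bits give 1, 2, 4, or 8 bytes
    first = data[0] & 0x3F         # strip the two length bits
    return int.from_bytes(bytes([first]) + data[1:length], "big")
```

So small values cost one byte, and most HTTP frame lengths end up as one or two bytes, as noted above.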
E
There's
some
address
validation,
stuff
that
in
the
draft
now
so
someone
who
detects
a
connection
migration
can
can
sort
of
probe
that
particular
path
and
see
that
there's
actually
someone
live
on
the
other
end
and
return
route
ability
checks
are
quite
nice,
we've
renamed
the
packet
types
which
doesn't
seem
like
it's
much,
but
we
also
remove
the
seperation
between
client
and
shake
and
server
handshake
packets.
They
no
longer
have
a
different
packet
time.
I
think
there
was
something
that
was
one
other
thing
that
I
added
to
this
slide,
but
I've
forgotten.
E: Ah, I did it in my own copy and didn't upload the changes; my apologies. So, Mike and I have been talking about moving to a model where requests in HTTP have their own identifier at the HTTP layer. When we made some changes for push, we created this notion of a push identifier, and so when you make a push promise, it has an identifier that's explicit at the HTTP layer, and then, when you provide the response in that unidirectional stream that's coming back...
E
It
includes
that
identify.
Currently,
we
we
identify
requests
by
their
stream
identifiers,
their
quick
stream
identifiers,
and
the
suggestion
here
was
to
provide
an
HTTP
layer,
identifier,
that's
explicit
at
that
layer,
and
it
allows
people
to
implement
transports
that
don't
expose
the
stream
identifier
to
the
application
layer.
A.
G: I asked because, if you wanted to make it trivial to use the same mechanism for something else like DNS (DNS uses the QNAME for this exact purpose), having the identifier enable that would be handy. It's certainly possible to map a QNAME into an identifier that is a number to get around this problem, but I just wanted to raise the fact that in other protocols, using a string might be handy.
E: Okay, so then DNS might use its QNAME or something else; it doesn't really matter, they can make that choice. This would be specific to HTTP, but we'd be establishing, I guess, a best practice in a sense, saying: don't rely so much on the underlying identifiers; make your own identifiers for this. And I think several people have expressed that this is a pattern they'd like to see, Ronnie.
E: The stream states are different between bidirectional and unidirectional, but when you actually look at the independent state machines, the primary difference between unidirectional and bidirectional is that there's an opening transition provided for a bidirectional stream when the paired side opens. So if one side opens on a bidirectional stream, the other side is implicitly opened to match, and that's the primary difference; beyond that point, there's not a lot of difference there.
E
The
the
things
that
the
things
that
sort
of
come
up
as
a
result
of
that
is,
if
you
are
blocked
on
a
particular
stream,
you
might
send
a
string
block
thing
that
will
cause
the
stream
to
open
in
the
opposite
side
in
the
opposite
direction,
those
sorts
of
things
where
it
gets
interesting,
but
other
than
that,
once
you're
in
steady
state
or
you've
transmitted.
All
the
data
on
all
the
states
are
exactly
the
same.
They
closed
the
same
way
and
and
everything
that
we
have
independent
read
and
write
closed.
I: I was going to add to that. Sorry: Ian Swett, Google. Martin's PR that's in flight, and hopefully will be landed soon, does a very nice job of clarifying all this. So if you're super interested in stream state machines, look at the PR, and if you're only kind of interested, then wait for -08. Yeah.
I: So, some of the principles: packets are never retransmitted; the information in them is. So, yeah: focus on what to declare lost, not what to send. Another principle is that you wait until you actually have information in order to make a decision. An example is that we don't mark packets lost until an acknowledgement for a packet that was sent after it is received. It kind of makes sense, but it makes things a lot simpler.
I: We also avoid undoing state in general, and in particular F-RTO, if you're familiar with that, which is the process where you might do a retransmission timeout, then decide it was spurious afterwards, and then try to, like, unwind your state. We don't do that, because unwinding state is complicated and bug-prone. Instead, we just wait until we actually know whether the RTO was spurious, and then adjust the congestion window at that point. In the meantime, we basically clamp down the congestion window.
I
So
it's
not
actually
it's
like
basically
temporarily
clamped
down,
but
we
don't
actually
change
any
of
the
congestion
control
state
so
turns
out
to
work
out
a
little
bit
simpler.
Next,
one
notable
differences
from
TCP
quick
doesn't
have
retransmission
ambiguity,
because
technically
no
packet
is
ever
really
retransmitted.
It
is,
you
know
the
same
frames.
Maybe
rebound
'old
with
a
new
packet
number
or
slightly
different
frames,
may
be
bundled
with
a
packet
number.
I
So
once
you
hear
an
acknowledgment
for
a
packet,
that
means
the
pure
has
received
it
and
processed
it,
which
is
next
next
item,
and
you
know
from
that
point,
you
don't
have
to
keep
it
in
your
send
buffer
or
any
other
sort
of
things
that
you
might
have
to
do
for
TCP
with
what's
acts
which
are
revocable
and
recovery
ends
when
the
packet
number,
not
a
sequence
number
after
the
largest
outstanding
packet
is
acknowledged.
So
this
is
sort
of
a
subtle
point,
so
I'll
try
to
elaborate
on
it.
In
TCP.
I: ...recovery is the period of time you enter when you first experience a loss, and it lasts for approximately an RTT in TCP. However, in order to exit recovery, essentially all the lost packets have to be filled in; your sequence number space has to be filled in.
I: Okay, so now I'm actually going to step through the two types of loss detection in QUIC. One is ack-based: this is when you receive an acknowledgement for a packet that is larger than, you know, some previously sent packet. So, say I received an acknowledgement for six, and maybe not for something below six, as this example will show. And then the other type is going to be timer-based: tail loss probes, retransmission timeouts, and such.
I: So in this case it's basically a simple matter of math: we compute whether an acknowledged packet number is three or more above a packet that has not been acknowledged. Next slide: the receiver receives one and two, and sends an acknowledgement frame to the sender. Next slide: three, four, and five are dropped, so six is still in flight.
I: Next slide: the receiver received six and immediately sends an acknowledgment for one and two, as well as six. Next slide: when that acknowledgement is received, the sender declares three lost, because it's at a packet distance of three. In this example, nothing is immediately retransmitted, just because I don't really want to go into that at the moment. But yes, if three were retransmitted as seven, it could be used to recover. Actually, I'm going to go into a different example, which is why I'm not going to retransmit three.
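The ack-based rule in this example can be sketched in a few lines. This is an illustrative sketch of the packet-threshold check described above, with my own names; a real implementation tracks much more state.

```python
# Hedged sketch of ack-based loss detection as described above: a packet
# is declared lost once a packet sent 3 or more packet numbers after it
# has been acknowledged.
PACKET_THRESHOLD = 3

def detect_losses(unacked: set, largest_acked: int) -> set:
    # Declare lost anything at packet distance >= 3 below the largest ack.
    lost = {pn for pn in unacked if largest_acked - pn >= PACKET_THRESHOLD}
    unacked -= lost
    return lost
```

In the example above, packets 3, 4, and 5 are outstanding when the ack for 6 arrives, so only packet 3 (at distance exactly three) is declared lost.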
I: Thank you. Next slide: early retransmit. So early retransmit, as with TCP, kicks in any time the largest outstanding packet is acknowledged and there are packets underneath it that have not been acknowledged. This is an effort to speed up loss detection in cases where you have losses near the tail.
I: Timer-based loss detection. So timer-based loss detection is probably a little more different from TCP than the regular FACK-style and early-retransmit cases I just ran through. Timed-out packets do count towards bytes in flight, so in QUIC, bytes in flight can exceed the congestion window for a period of time. Again, that kind of ensures that you actually get a correct congestion control signal even if you're in the timeout sort of mode. Timeout events do not immediately cause packets to be declared lost or change the congestion control state.
I
So
essentially
the
idea
is
I,
don't
know
anything
all
I
know
is
I
even
heard
anything,
so
that
could
mean
I.
Have
you
know
a
temporary
Wi-Fi
outage
or
it
could
mean
just
like
you
know.
Suddenly,
a
bunch
of
packets
are
in
the
queue,
and
you
know
everything's
fine,
so
I'm
gonna
wait
until
I
actually
know
something
to
adjust
the
congestion
window
or
declare
anything
lost.
So
therefore
yeah
the
ce-1
does
not
change
during
these
periods
of
time
and.
I: That's true for all three types of timeout, which I'm going to walk through. Next slide. So the handshake timeout is set to the RTT (which is either a measured RTT or the initial RTT) times two to the number of timeouts plus one. So essentially, if you have an initial RTT, as QUIC does, of a hundred milliseconds, then the first timeout for a handshake would be 200 milliseconds, and then 400 and 800 and so forth, counted from the last sent handshake frame.
I
So
it's
kind
of
like
the
most
recent
handshake
gram
and
you
retransmit
all
not
acknowledge
tangent
packets.
So
this
is
a
little
bit
more
aggressive
than
the
other
time
out
modes
potentially.
But
the
idea
is
that
you
know
the
handshake
should
be
relatively
small
and
additionally,
it's
very
important
that
it
actually
completes,
because
otherwise
you
can't
really
make
any
progress
on
the
connection.
I
Sorry
and
yes,
the
expected
RT
t,
if
you,
if
you
don't
actually
have
an
R
DT
sample,
as
for
the
very
first
packet,
is
based,
possibly
on
previous
connections,
if
you're
doing
RT
0
R
2
T
resumption,
it
could
also
be
based
on
other
environmental
factors.
Like
I
know,
I'm
on
like
a
2g
network,
so
I
can
set
it
to
2
seconds
I
mean.
Certainly,
there
are
tweaks
that
can
be
made
there
next
slide.
I: I would say that's a yes, but you make an interesting point. The original ping frame was retransmittable, well, because it actually had no payload, and we've kind of tweaked the meaning of the ping and pong frames to be a little bit different. But the short answer is yes: essentially, it's anything but an acknowledgment, any time you actually need it to make it through.
I: Next slide: retransmission timeout. So this is the classic retransmission timeout formula: you have the smoothed round-trip time plus 4 times the RTT variance. You also have a min retransmission timeout; as with many of these other constants, we are currently using the Linux default, which I believe is 200 milliseconds, and it has exponential backoff; that's how QUIC is currently setting it.
I
So
when,
if
you're
doing
you
know
tail
off
probe
twice
and
then
retransmission
time,
I
would
say
once
or
twice
at
some
point,
one
of
two
things
is
going
to
happen
either
at
the
connection
is
going
to
timeout,
because
you
know
you
just
get
black
hold
or
eventually
you
are
going
to
receive
an
acknowledgment.
When
you
receive
an
acknowledgment
at
that
point.
In
the
case
of
the
retransmission
timeout,
you
will
go
down
to
the
classic
white
minimum
congestion
window,
which
we
default
to.
I
If
it
turns
out,
the
retransmission
time
out
was
was
valid.
Ie
the
retransmission
time
out,
packets
got
acknowledged,
but
nothing
previous
did.
Similarly,
if
any
of
these
packets
are
acknowledged
like
the
Kaos
probe
or
otherwise
packets,
besides
the
retransmission
time-
oh,
you
know
normal
loss.
Recovery
will
kick
in
so
even
though
we
don't
declare
a
packet
lost
when
we
do
at
a
loss
probe
any
previously
sent
packet.
If
the
tale
us
probe
packet
is
acknowledged-
and
it
hasn't-
you
know
the
previous
packets
haven't
been
delivered,
then
yes,
they
will
be
declared
lost
just
like
normal.
N: Jana Iyengar. Yes, TCP is supposed to send one, but the cost of spending more time after a timer is incredibly high, and sending only one packet makes that one packet incredibly fragile. If you lose it, then you have to wait for twice the RTO before you do any sort of recovery again, so sending two protects you from that fragility. I think there's value in considering this for TCP as well, to be honest.
I
So
so
yeah
I'm
gonna
actually
go
the
constants
in
the
next
slide,
but
I
think
there
definitely
are
things
here
that
are
taken
as
a
model
from
you
know
what
Linux
does
as
default
and
not
necessarily
from
from
the
RFC's,
and
so
like
that
the
constants
are
sort
of
in
formulas
are
sort
of
a
mixed
at
you.
That.
N
Is
okay
but
I
think
like
whenever
we
are
deviating?
We
should
probably
document
it,
so
this
is
perhaps
in
July,
and
this
is
perhaps
an
interesting
question
number
some
of
the
constants
that
you
will
see
here
are
also
aggressive,
because
we
found
them
to
be
quite
useful
and
and
and
quick
is
definitely
driving
towards,
like
the
handshake
time
out,
for
example,
next
slide.
N
And
some
of
them
aren't
even
what
Linux
does
one
of
the
things
that
is
DLP
in
quick.
We
do
two
tailless
probes
before
we
go
to
an
RTO,
whereas
in
Linux
it's
actually
one
TLP
before
we
go
to
an
RTO,
and
some
of
this
is
designed
to
be
try.
It
is
necessarily
trying
to
be
more
more
aggressive
in
terms
of
recovery,
but
still
safe,
because
the
fundamental
mechanisms
are
still
the
same.
I.
N: So we should discuss that; I think a big part of this is basically discussing the constants. Those are, in my mind, what I expect we will end up discussing. But, as you said, one of the higher-order points that should be clear from here is that the safety properties are not getting violated. We are pushing the constants around, but the fundamental mechanisms are still the same.
N: Because this is the sender picking the value when doing a tail loss probe: one of the deficiencies in TCP was that we, for example, can't assume that the other side is doing 50 milliseconds, and we have to pick a worst-case value of 200 milliseconds. So this might be a useful transport parameter to negotiate, so that we can pick the value.
I: I would agree; that's what issue 912 is, actually. I want to get a feel for the room, because this is one of those things where I would very much like to move forward, and I don't think there's a lot of objection, but I think, for once, I have the right people in the room. Would anyone object to communicating the peer's, you know, maximum ack delay in transport parameters in QUIC?
N: Informative references, but we are not going to live with the same MUSTs and SHOULDs that we have in TCP, in RFC 5681 and the like, and that's part of the reason why I think that folks who are familiar with the relevant documents should be engaged in the sanity-checking of this document as well.
N
That
said,
the
constants
here
are
different,
as
Ian
pointed
out
earlier.
Some
of
the
mechanisms
are
subtly
different
and
that
will
cause
some
constants
to
change.
We
are
also
chasing
perhaps
a
slightly
different
target,
which
is
latency,
and
so
some
of
the
constants
are
a
little
bit
more
aggressive
and,
as
you
pointed
out,
the
Baro
Naja's
from
the
specs,
but
also
from
current
practice.
So
this
may
be
something
that
can
feed
into
the
TCP
process
that
maybe
can
be
used
to
change
constants
that
we
use
for
TCP
as
well
at
the
IETF.
A: Lars: I agree that the conversation should happen here; we have a lot of these people in the room. That's why we specifically said we're going to talk about this stuff here and not elsewhere, where that might not be the case. So I echo Praveen's suggestion of actually documenting the differences to the TCP values, and maybe even putting in an appendix that sort of describes the motivation for the differences, because otherwise we're going to keep getting those questions over and over.
A: It doesn't exist in the RFC stream, right? Well, it does... it has no standard... well, it's an expired draft; I believe we never did that. Okay, yeah. Well, so you could informationally cite the expired draft. But I think there are sort of several sets of constants that people know about, right? One is what the RFC says, and then there's what Linux does.
A: Sure, I mean, the only sets of values that are easy for people to look at are the ones from the open source stacks, right? And if you're willing to contribute text about what Windows might be doing for constants, that's fine, do it. But I think we're running the danger of having a comparison matrix of what different stacks are doing in this document, which is also probably not going to be super useful. Yeah.
I: Seems fine to me, yeah. I mean, I'd be happy to have your help, Praveen, if you wanted to contribute some text to that. Like I said, there's a small section there that basically says most of these were yanked from Linux, except for these two or three, which is pretty terse. So I mean, we could certainly go in and elaborate a little bit more, if it's helpful to people generally.
N: I just want one last comment on this, which is that I would encourage people who are reviewing this, and reviewing the constants, to not simply look at the constants, because the mechanisms can be subtly different. As I said earlier, the safety properties are not violated, but the mechanisms are definitely different in general; as a rule, you can assume that.
A: Speaking of which: I also got some feedback that people are confused that all the other drafts are at -07 and this one is at -06. So my suggestion would be that we try to keep the base draft set together at the same number, with the next one at -07. It showed up in my reading list at one number, and the datatracker records it at -06, so I'm very confused.
N: There was a question about the max ack delay in TCP. Yeah, so that's a deficiency in TCP, and I think we have an opportunity here to actually explicitly communicate the value, so we can do the right thing with a loss probe. The other question I had was about this default RTT, the 200-millisecond handshake timeout: I assume that things like caching the initial RTT for the path and using that...
I: That is a suggestion. I think currently it suggests you basically cache the value for a server when doing 0-RTT. That's not quite sufficient for some circumstances, because if you're on a very long RTT link, it may turn out that some new server you've never contacted before also has a very long RTT. So I don't have enough implementation experience with the latter part, but certainly the former approach of just saying, last time I went to www.example.com, it was X...
D: Gorry Fairhurst. I think the algorithms are going to be different here, and there are good reasons, because your mechanisms are different, so the timeout constants are going to be different. But then I worry about whether the protocol could be more aggressive, which is kind of repeating, I guess, what the previous speaker said. But we need some way to get a handle on this; just a metric saying yours is a hundred milliseconds and TCP's is 200 milliseconds doesn't help.
D
We
need
somehow
to
understand
how
it
will
compete
in
a
queue
and
in
particularly
in
really
strange
environments
where
you've
got
wireless
mangling
at
one
side
or
something
so
I.
Don't
think
this
is
an
easy
comparison,
but
if
it's
gonna
be
more
than
experimental,
then
somehow
we
have
to
figure
this
out.
So.
A
My
hope
is
that,
like
next
year,
we'll
have
or
early
next
year,
hopefully
we'll
have
implementations
that
you
can
actually
run
some
both
data
through
and
those
of
you
with
students
might
throw
some
at
this
and-
and
you
know,
get
some
papers
out
of
it
and
come
back
and
show
the
results
from
experiments
or
simulations.
Or
what
have
you
so
that
we
can
have
a
discussion?
Maybe
nice
ecog
or
something
on
on
creek
versus
tcp?
And
what
does
it
look
like
I,
like.
K: Hey, so I got up to address your question, your comment about how we should expose my settings to the other side during the handshake, about which of my constants the other side sees.
K: Well, yeah: once we decide to do one, others might be communicated as well, but let's just focus on this one. The specification doesn't say you have to delay the ack at all, and it also doesn't say you have to implement this by having a single timer that pops and then you pass in the acks. So one thing you might do first is, you might consider...
K
You
know
you,
by
considering
the
number
of
packets
you've
already
received
you've,
not
yet
act
as
pressure
on
when
you
act
now,
I'm
not
saying
any
particular
thing.
You
should
do
I'm
just
saying
that
that
I'm
worried
about
codifying
the
notion
that
this
opinion
about
that
the
receivers
algorithm
in
the
protocol,
because
the
receiver
might
not
have
a
timer
right,
and
so
what
is
he
gonna
say
that
I
think.
I: I'm like: you can set the timer for this, and if it's been this long, you've probably waited long enough; that's all I'm really looking for. Right now in TCP, most implementations, I believe, either use 100 or 200 milliseconds, and yet I think the RFC says, like, one second, and so, basically, if you're conservative, you choose one second, but that's crazy, so no one does; they choose some smaller number and hope it works. Sure. Well, I guess they just...
Q: Brett Jordan. I apologize for my ignorance in advance, but to shift gears a little bit: from my understanding, all of the acks for QUIC, as we currently have them defined, will all be inside of the encrypted tunnel. (That's correct.) And so I am interested to know how we plan on enabling the network operators to actually help when things go wrong. When you run a large network, you may not own either end, the client or the application.
N
Jana Iyengar. I want to respond to something Cory said earlier, which was around comparing this with TCP and so on. I think that's fantastic; we should do it at some point and try to understand in more detail how friendly these constants are relative to TCP's. Michael said something about how QUIC's benefits are coming from these constants; part of my response is that that's absurd.
N
Specifically, if you look at loss recovery, Ian pointed out earlier that the recovery period itself is actually different in TCP as compared to QUIC, and this happens because QUIC does not reuse sequence numbers. As a result, you never, you do not have, the notion of a cumulative ACK point, so...
N
Yeah, actually, okay, so there's now, I guess, the most recent version of it, and it has some text; but the one I was looking at did not get published. No, no.
E
So if this is truly valuable, then maybe the cost is not particularly high, because that ACK delay is actually observable on the network anyway, so we could probably say it's not a problem. But for the general class of things, as was pointed out, where we might want to add other things, we have to think carefully about what we're doing with these sorts of things. Agreed; I think we should consider it on a case-by-case basis. In this case, I think 25 milliseconds is actually a pretty reasonable default for the public Internet, and I think most of the people who would be changing it would actually be cranking it down for data-center environments. Yeah.
R
Mirja Kühlewind. For the ACK delay, it might actually be useful to have a mechanism where you can change it later on as well, because you might see that your network conditions have changed somehow and you want to adapt. I mean, it's probably most useful to have it right from the beginning, but this might be useful too. Yeah.
J
All these constants are, as I said, kind of based on the Linux numbers, but all the RTT measurements in QUIC are designed to be microsecond accuracy, and when we did have timestamps, they were also microsecond accuracy, though now they're gone. So yeah, definitely, I think microseconds is the goal, you know, in all these circumstances.
K
Yeah, I mean, now we're really bike-shedding, but the only place distinguishing microseconds and milliseconds matters is if you're encoding it on the wire. So I guess the point is that if you're going to encode maximum ACK delay on the wire, you might as well do it with units of microseconds, anyway.
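One way to carry microsecond granularity without spending many bytes on the wire, roughly the ACK delay exponent idea that surfaced in the drafts, is to shift the microsecond value down by a small negotiated exponent before encoding it, accepting the loss of the low-order bits. A minimal sketch, with an exponent of 3 assumed:

```python
def encode_ack_delay(delay_us: int, exponent: int = 3) -> int:
    """Value carried on the wire: the delay in microseconds with the
    low `exponent` bits dropped (so the wire value stays small)."""
    return delay_us >> exponent


def decode_ack_delay(wire_value: int, exponent: int = 3) -> int:
    """Recover (an approximation of) the microsecond delay."""
    return wire_value << exponent
```

With exponent 3 the resolution is 8 microseconds, which is still far finer than the millisecond units TCP implementations typically work with.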
I
Crap, I totally had one more slide: yeah, implementation tips. These are things that Google QUIC has messed up at some point, so don't do these. Spurious retransmissions happen; you should really continue to track the data sent in lost packets for at least a little while after they're declared lost. You don't have to track it forever, but at least for some small period of time, because weird twiddles and such do happen.
I
You get one spurious ACK, or one packet that races ahead of others and declares 50 lost, and bad things happen. Never, ever, ever retransmit an acknowledgement frame; always send up-to-date ones. If you don't do that, you'll be in a world of pain and loss recovery will work like absolute crap. Always send packets in packet-number order, or you will either end up with spurious retransmissions or your implementation will be stupidly complex. Please, always send them in packet-number order.
B
Thank you, Ian. So we're doing fine on time; I think it's 10:30, we've got one more presentation and then the discussion of the third implementation draft, so I think we're fine. And, as I think was mentioned, we're hoping to see more implementation of this. We'll discuss that under the third implementation draft timeline, but then we'll have running code to get our data from, which is nice. And draft seven is indeed now up, which is great. Yes, Bruno.
N
Sorry, one more question on the previous draft. The very first slide said that the draft does not recommend what is sent; I had a brief comment on that, and you were going to explain the philosophy of why that is. It might still be useful to suggest a particular retransmission strategy in the draft.
I
I think the transport draft has some fairly wishy-washy text suggesting that, all things being equal, you should send retransmissions before new data, because the application is trying to make forward progress and it reduces the odds of flow-control blockage. I think there are some considerations in the draft about that as well, but I'd have to reread it. The idea is just that there's no normative language, nothing like "you must retransmit..."
N
I'd generally add that there's also the discussion we had early on, when the draft was named, about the possibility of later incorporating partial reliability; so this was really more about detection than about actual recovery. The draft explicitly talks about detecting losses, not about actually retransmitting.
N
Well, thank you for being here. I'm going to discuss connection migration as it is in the draft, and sort of where we want to go with this: changing the text in the draft and moving forward with mechanisms that are more resilient, stronger, and more useful. I'm not trying to nail down all the details in this discussion; this is really a higher-level discussion.
N
This is meant to get feedback from the community about direction and about principles. To the extent that I've been able to, I've tried to pull some principles out. If you think there are others that should be considered, or if you think these principles are wrong, this would be a good time to talk about them, because we expect to continue working on this, and this is an initial sense of the direction that we are heading in. Next slide.
N
When a client moves from one address to another, it basically sends data over the new address. The server observes that a packet arrived from a new address on the same connection, and it sends subsequent packets to this new address. This is effectively what I'll call latching behavior, where the server latches on to a new peer address.
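The latching behavior described here fits in a few lines: the server simply adopts the source address of the most recent packet on the connection, with no validation, which is exactly the weakness discussed later in the session. A sketch with invented names:

```python
class NaiveServerPath:
    """Sketch of 'latching': the server adopts the source address of
    the latest packet on the connection as the peer address for all
    subsequent sends, before any validation of that address."""

    def __init__(self, peer_addr):
        self.peer_addr = peer_addr

    def on_packet(self, src_addr):
        if src_addr != self.peer_addr:
            self.peer_addr = src_addr  # latch immediately, no checks

    def send_target(self):
        return self.peer_addr
```

A single packet, genuine or spoofed, is enough to redirect the server's traffic under this model.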
N
This is the basic mechanism that we have right now. It, of course, assumes that there's only one address available at any given time at the client. It works well for a couple of models: basically, when you've lost connectivity on one network and you get connectivity on another, the make-after-break situation. It also works well for inadvertent migrations, for NAT rebinding, where the old address is basically gone completely. The problem with this approach, of course, is that it allows for some...
N
It keeps you from being able to do proactive migration when you have multiple addresses available, multiple networks available at the client. And also, all you need is one spoofed packet to go from the client to the server, and the server latches onto the new address immediately and only then tries to validate it. It doesn't verify the address first, before starting to use it. So this model obviously has issues, and I'm going to try to propose some ways forward. Next slide.
N
Specifically, let's just go with cell and Wi-Fi, because that's the most common use case at the moment, and the most common case that we expect to get used in the short term at least. In this case, clients tend to have a preference; they tend to have a local policy that says you prefer Wi-Fi over cell.
N
This is common on all the platforms I know of, but in general it's possible, and likely, that the client will have a preference for a particular network even when two are available. So in that world, when migration happens with multiple addresses, I'll draw two scenarios. These are example scenarios, but they capture a family of cases. The first one is when a mobile client, for instance, connects to Wi-Fi: the mobile client is already using cellular data and is migrating onto Wi-Fi.
N
So this is you walking into your house or into your office from outside, for instance, or walking into a place where you have Wi-Fi access, and now the mobile client also has Wi-Fi available. What you want to do is migrate existing connections that are running over cell over to Wi-Fi, because there's a preference for Wi-Fi. But you only want to do this...
N
...if Wi-Fi actually works, if you can actually reach the peer via Wi-Fi. For instance, if you have Wi-Fi connectivity, but there's UDP blockage or, god forbid, QUIC blockage on the Wi-Fi network, then you won't be able to speak to the peer over it, and you don't want to discontinue using cellular under those conditions. So Wi-Fi is a preference, but you want to continue using cell if Wi-Fi is in fact not usable for you, for this connection.
N
The second use case is when Wi-Fi quality degrades; this is the infamous parking-lot problem. This is when you have a phone and you are walking out from, again, your office or your home into the parking lot, and you're trying to do some work on the phone, or trying to look up directions, and as you walk across the parking lot the Wi-Fi quality is degrading, but your phone will not let go of Wi-Fi. That's a platform decision that is generally universal across platforms.
N
Under these conditions, you want to be able to migrate existing connections from Wi-Fi to cell, again given that the client and the application have privileges to do so. But because of the preference for Wi-Fi, you want to be able to migrate back to it if Wi-Fi in fact gets better. So if you were using Wi-Fi and someone turned on the microwave and Wi-Fi blacks out for, like, two seconds, you want to come back to Wi-Fi when that network is back.
N
Currently, as I said, all we have is a mechanism to latch on to a new network, latch on to a new path, latch on to a new address, and send all data over that path. But really what we would like here is the ability to send a probe in addition to sending data. So you are sending data over a path that you know and you love, you want to try a new path for whatever reason, you send a probe packet, and the peer then responds to it.
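A probe that is separable from data might look roughly like a challenge and response carrying an unguessable random token, loosely in the spirit of what later became PATH_CHALLENGE and PATH_RESPONSE. This is a sketch with invented names, not the draft's mechanism:

```python
import os


class PathProber:
    """Sketch of probing a new path while data continues on the old
    one. Each probe carries a random token; the path is considered
    validated only when the peer echoes that exact token back."""

    def __init__(self):
        self.pending = {}  # path identifier -> outstanding token

    def probe(self, path):
        token = os.urandom(8)          # unguessable by an off-path attacker
        self.pending[path] = token
        return ("PATH_CHALLENGE", token)  # sent on `path`; data stays put

    def on_response(self, path, token):
        if self.pending.get(path) == token:
            del self.pending[path]
            return True                # path validated; safe to migrate
        return False                   # wrong or replayed token: ignore
```

Because the token is random, an attacker who cannot read traffic on the probed path cannot fake a successful validation.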
N
So that's sort of one of the principles here: we want to make probing and latching separable events, and that's something I'm going to keep working on in the next revision. What this means is not only that an endpoint can send a probe packet in addition to data packets, but also what happens when a peer, in this case, as I said with planned migration, the server, receives a probe packet.
G
On probing: in a lot of cases, on devices which combine cellular and Wi-Fi interfaces, when the Wi-Fi interface is up, the data channel is not available on the cellular interface for an existing connection. So in those cases, how would you craft a probe such that it causes the data channel to actually come up? (Oh, you mean on the cellular side.)
N
G
You have signal and you're attached, but there's no data channel, right. So you may have a situation here where, in order to cause the data channel to come up, you may be reaching much further into the OS stack than you think you are; the transport itself may not be able to do that. (Yes, it depends on what the knobs are; certainly in Android 6... yeah.)
G
Even if you have these other conditions. And what worries me about that is: if you actually did want to do make-before-break, you actually need a facility from the OS that says, "hey, I need a data channel for this", which may mean you actually have to change something at the OS level.
U
Tommy Pauly, Apple. Just to speak from our implementation side, not the Android side: I think this would work just fine. I think one of the benefits here of QUIC often being implemented in user space is that there's a little bit more ability to play with knobs and poke things, like getting the radio up, that has traditionally been harder within a TCP stack in the kernel, to interact with the cellular interface that way. Oftentimes that's what we will do when we're doing path evaluation.
L
Mr. Dawkins. I'm not quite sure what hat I'm wearing, but let me say it and you can tell me which one I should have been wearing. I'm asking a question that I think is about 90 degrees rotated from Ted's question, which is: how stable do you think the ordering of your preferences would be? Is that something like a natural law? And if it's not a natural law, is there anything we need to say about the way that somebody, somewhere, can express those preferences?
N
I think what you're trying to say here is that when additional interfaces are available, that is, when you know that there is layer-three connectivity, that's when you probe, versus saying that you explicitly require that QUIC be able to kick up the interface. So it could be a reactive behavior versus a proactive one, depending on the system policy. The question is whether QUIC dictates when it brings up and tears down an interface, because you could, for example, kick the network, bring the interface up, and assume that you have sent the probe and everything is good.
N
But later, when you kick the interface again, the addressing information may have changed, so your probe result may not survive. So you make an interesting point, and I did think about that. What you're talking about is whether we should proactively probe and keep paths around for fast switchover, or whether we should kick a path and send a probe only when we are actually experiencing degradation or failure or what have you. The SCTP model for this was very much negotiated.
V
Erik Kline, Android. I'm not quite sure what Ted's point was. I do think this is an implementation issue, but I also think we should not necessarily speak in terms of interfaces but of addresses, because everything here should work identically for rotating the connection across multiple IPv6 privacy addresses on the same interface.
U
Tommy Pauly, Apple. I just want to reiterate, and I think you agree with this, that the document should be fairly agnostic to what the system policy is and should describe the mechanisms. Both in terms of what preferences we have, which is definitely going to be determined by the system and other properties, but also the aggressiveness of switching over. For example, for the preferences we have right now for doing MPTCP, we've actually exposed them in kind of higher-level APIs.
U
Does the application have a preference to do essentially a failover mode, or does it want an interactive mode in which you kind of keep things up? Both should be possible here. This relates to some of the work that we're doing in TAPS about how you specify application preference for resilience on this type of link; so being compatible with that would be good.
N
That's actually very useful feedback, thank you. All right, moving along: moving on from latching, and latching being inseparable from data. In this example at least, we were talking about the mobile client being able to decide when to use Wi-Fi and when to use cell. This certainly tends to be the case for Wi-Fi and cell, where the client is very clearly in charge of deciding whether it's okay to use cellular for this particular request, for this particular stream.
N
For this particular connection, or not. And even if cellular data is available, the client may not want to use it for this particular transaction, for this particular stream, and that's something you want the client to be able to decide. This is a policy decision, as Tommy was pointing out, based on various preferences. The policy decision can play out in a number of ways, but you want this to be a client-local decision.
N
We allow for the performance argument to happen. So the principle that came out of this is that we want interface selection, and I'll use "interface" here again instead of "address", to be a local policy decision at the client; and, when possible, we want the mechanism to support the server's ability to choose. That could be useful in a case where the client says, "I don't care which interface or which network is used."
N
Next slide. So here's a strawman, bearing in mind these two principles and the discussion that we've had so far. The same two scenarios that I used at the beginning: I'll walk through them, through a particular way of doing them with these principles in mind, and the details can obviously change as we go through it. But this is a strawman. When the mobile client connects to Wi-Fi, which is when your mobile client is coming from cellular to Wi-Fi...
N
It finds Wi-Fi as an available network, sends a probe packet over it, and continues sending data over the cellular network. The server receives this probe and sends an ACK of it to the source address of the probe. Basically, what we want is for the server to respond to the probe over the Wi-Fi network, but to keep sending data over the cellular network, until the client receives confirmation that this path in fact works.
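The strawman above, keep data on the current path, probe the preferred path, and switch only once the probe is acknowledged, can be sketched as a tiny client-side state machine (all names invented; this is one possible policy, not the draft's text):

```python
class MigratingClient:
    """Sketch of the strawman: data stays on the active path while a
    preferred new path is probed; migration happens only after the
    probe on that path is acknowledged."""

    def __init__(self, active_path):
        self.active_path = active_path
        self.probing_path = None

    def start_probe(self, new_path):
        # Probe goes out on new_path; data keeps flowing on active_path.
        self.probing_path = new_path

    def on_probe_acked(self, path):
        if path == self.probing_path:
            self.active_path = path    # confirmed working: migrate data
            self.probing_path = None

    def data_path(self):
        return self.active_path
```

If the probe is never acknowledged (UDP or QUIC blockage on the Wi-Fi network, say), the connection simply stays on cellular, which is the behavior the talk asks for.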
E
I wanted to get through the example. To Tommy's point earlier, I think this is just one of the many approaches you might take to doing this, right? So the example that we have in what's documented right now in the draft is a potential policy that someone could apply here, which is: oh,
E
Wi-Fi is available, drop the cellular, go straight to the Wi-Fi. It's abrupt, it breaks things, it's going to take some time before things get working again; but it's a reasonable policy. Also, if you're actually concerned about throughput during this process, this doesn't necessarily allow you to open the cwnd on the new path particularly well, and so you might leave your...
E
This particular design, assuming that you reset your congestion controller for the new network and new path, won't allow you to have the same cwnd that you previously had, so you won't necessarily get the same sort of throughput that you would have had on the cellular connection. So it's possible that someone keeps the cellular connection open for a little longer, just so that the Wi-Fi cwnd can open up. I think all of these things are possible; I just want to put that out there.
K
Hi. What's your threat model for this design?
N
There are two things in there. One: you don't want the server to start sending to an arbitrary IP address. Yes, because some client could put in somebody else's IP address, and that could be the client itself or a third-party attacker. So if a third party sends an arbitrary packet with a spoofed address, the server shouldn't start latching and sending somewhere else. Similarly, you don't want the client to be able to drown somebody else in data; you don't want a DoS attack, or an amplification attack, to be possible with this mechanism. Okay.
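One standard defense against the amplification half of this threat model, used here purely as an illustration, is to cap what a server sends to a not-yet-validated address at a small multiple of what it has received from that address. The factor of 3 below is an assumption for the sketch:

```python
class UnvalidatedPath:
    """Sketch of an anti-amplification guard: until the peer address is
    validated, outgoing bytes to it are capped at LIMIT_FACTOR times
    the bytes received from it, so a spoofed packet can't turn the
    server into a traffic amplifier."""

    LIMIT_FACTOR = 3

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.validated = False

    def on_receive(self, n):
        self.bytes_received += n

    def can_send(self, n):
        if self.validated:
            return True  # address proven reachable; no cap needed
        return self.bytes_sent + n <= self.LIMIT_FACTOR * self.bytes_received

    def on_send(self, n):
        self.bytes_sent += n
```

Combined with a token-carrying probe, this bounds the damage an off-path attacker can cause with a single spoofed packet.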
K
Can I divert you onto another network, causing things to actually be torn down, for instance? (Yes.) So that's one thing you might want to do, but I mean, this is all you have here, right? If I have, even if I have, essentially access to... if I can basically get one packet out of the... (Yeah, well.)
N
So that's a valid point. I am saying here that the server needs to validate the client as well, and that's something we want to do, just not at the point of the probing packet. This is going to end up being a different type of packet; it's not going to be a simple PING/PONG frame, and I'll talk about that in a moment. But yeah, at the moment...
G
I got up when Martin was talking earlier, because he was trying to point out to you that the current make-after-break model may be a policy of the end system, one of the potential policies. But I would like to point out that one of the things the first mechanism was designed to handle was NAT rebinding (yes, right), and that's not really a policy of the end system; it will continue to happen in the presence of this other mechanism. And so, a really basic question...
X
have
a
we
had
a
reasonably
aggressive
schedule
for
retransmitting
packets
if
you're
switching
interface
is
you're,
going
to
see
potentially
div
quite
different,
RT
T's,
which
may
make
you
make
reach
stuff
retransmits
stuff
even
but
this
is
since
this
is
a
local
policy
decision.
I
as
a
sender,
know
that
I'm
going
to
change
interfaces
where
I
may
experience
different
are
two
T's,
which
means
since
I
know
this
I'm
trying
something
new
here,
maybe
I
shouldn't
should
be
reconsidering
my
retransmission
schedule,
if
I'm
trying
to
figure
out
whether
something
got
lost
or
not.
Yes,.
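The point about RTTs differing across interfaces suggests resetting the RTT estimator when the path changes, so the retransmission timer is not driven by stale samples from the old path. A sketch using RFC 6298-style smoothing (the class name and the pre-sample default are assumptions):

```python
class RttEstimator:
    """Sketch of an RFC 6298-style RTT estimator that is cleared on a
    path change, since the old path's samples don't describe the new
    path and would skew the retransmission timeout."""

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt

    def rto(self, min_rto=0.2):
        if self.srtt is None:
            return 1.0  # conservative default before any sample
        return max(min_rto, self.srtt + 4 * self.rttvar)

    def on_path_change(self):
        # Old samples no longer apply; fall back to the default RTO
        # until the new path has been measured.
        self.srtt = self.rttvar = None
```

Without the reset, moving from a 10 ms Wi-Fi path to a 100 ms cellular path would leave the timer firing long before the new path's ACKs could possibly arrive.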
U
Tommy Pauly, Apple. Yeah, I agree with that point. On the threat model and related things: I just wanted to bring up a suggestion of another piece of prior art to look at in this area, an encrypted protocol that goes over UDP, is mobile between interfaces, and has already been analyzed, which is MOBIKE for IKEv2. MOBIKE can essentially do this for an IKE session, and it is dealing with encrypted packets; there's explicit handling of NAT rebinding in it.
M
Xavier Marjou. I was thinking about possible consequences for other applications on the same device. Do you see the QUIC application as being able to activate an interface, or only as, I would say, a slave application that would react accordingly, for example, if the two interfaces are already activated?
N
This again sounds like an implementation question to me, about exactly how these mechanisms are in fact used. The goal here is to identify principles and sort of higher-order questions that we can use to design mechanisms that applications can then use. But, and correct me if I'm wrong, it sounds to me like what you're asking is a very specific use-case question about how implementations might actually use multiple interfaces.
N
Let me try to do that. So in the first half, I just walked over what happens when a mobile client connects to Wi-Fi from cellular. If you remember, I also mentioned the parking-lot problem, where the user is walking away from Wi-Fi onto cellular and the Wi-Fi network is degrading, and what you want to be able to...
N
What you want to do here, in the case that the client is aggressive, is move data over to cell but keep sending probes over Wi-Fi; when Wi-Fi comes back up, switch the data back to Wi-Fi, because Wi-Fi is again the preferred network. This is exactly the SCTP failover model, by the way. But that's basically what this covers. Next slide. Okay.
Y
So this question is not exactly tied to this specific slide, but it's probably worth asking in general: where does connection migration fit into the work of the working group? Is this a focus area? Is it in scope for version one? Do we need to talk about it now at all?
Z
Rick Taylor. I think this is a leveling point; I missed half the conversation there, but I think we're disappearing quite deep into the weeds here. I think the role of QUIC is to understand that it may have multiple addresses, and bearers may come and go, but how that happens is not QUIC's responsibility. QUIC should be able to cope with this occurring (yes), and how it happens is totally beyond scope. (Yes, it's just the mechanism to react to that.) Great, so just to ground...
C
Can we go back to the charter real quick? (Sure.) So I wanted to point out that we're really creeping into designing a multipath transport protocol here, and I want to make sure we don't do that accidentally. Because for all of the policies that I've seen here, there's only one little tiny tweak you have to make, from "okay, now I have two up", or three up, or n things up because I'm using them for probing, and then it's a line of code to send data over them.
A
I actually want to echo Brian's point. Some of the things you need to do to handle this case are also things you need to do for multipath. It would be awful if the things we did for multiple paths were completely different from this.
N
Let me articulate this, perhaps, in a slightly different way. Multipath uses multiple addresses simultaneously for sending data, for sending everything. What that means is that the sender has to maintain multiple congestion-control contexts, multiple loss-recovery contexts, multiple everything. When you have multipath, connection migration is always going to be a special case of it. The question here is: how do you get connection migration without having to implement the entire giant multipath thing? And it's entirely possible; you have to be able to do so.
N
This is, again, for people who are familiar with how SCTP did things: there was a failover thing in SCTP, and then there was the multipath thing in SCTP, and this is very much that divide. The failover half of it is incredibly useful. I'm actually going to argue for having this in v1, because I believe we want this as a fundamental property of the protocol that we use, but...
S
So this may be similar to the previous comment, but this is very similar to what SCTP is doing for address management. So I'd like to understand: what's the difference between the SCTP address management and QUIC's? And if there is a difference, I don't quite understand why it's different; if you could clarify this point, that would be great.
N
The one difference is that SCTP has exactly... I can't remember who asked this question, but Praveen had this question of whether a client can actually expose all of its addresses to the other side, and the other side uses them whenever it wants. That's the SCTP model. The QUIC model is: you have to send a packet over an address, over an interface, over a network, for the other side to be able to use it.
N
So bear in mind that connection migration only kicks in when you already have a connection that's roaming; this is not applicable during the handshake. I actually had a bullet about this that I removed, because I didn't want to proactively say it and then get everybody at the mic. But yes, we don't want to support this during the handshake. This happens after the handshake has completed, which means that you have a running connection.
A
Actually talking on topic, and not just... you know. Okay, so yeah, I'll get to it.
E
Whoever it was that caused us to up-level, that's great. Given that we're trying to be aggressive about these sorts of things, I'm inclined not to do this, despite the fact that I actually think Jana's design here is fundamentally correct, that it's something that would possibly work really well, and that it has some great advantages, like the probe packet. That notion actually kind of exists already in the current design; it's just not very good, and it kind of may just be enough for us right now.
E
If you actually sit down and go through the design in detail: you can send a probe on a separate path, and sending an additional packet on the original path will cause the other path to latch, but only momentarily, so you'll get packets traversing both paths for a period of time. It's not as good as this, but it means that we don't get into all of the details of this and spend time on that, and we can spend our time, maybe more productively, on, say, header compression.
A
One thing we talked about: I think when the charter text was written, there was an assumption that the working group would have already spent some time, on the side, talking about multipathing and multihoming, but we haven't had that time. The idea was that, having thought about this already, we would break off the parts that stood well enough and stick them here; but I don't think we have done that, and I think you're...
A
...getting to the point of: do we want to do this before v1? Or, what's the minimal thing we can do before v1 that lets us do something that isn't going to be terrible later on? So...
N
Some of these things are needed to actually make that work. I understand your point; yes, the latching point I'll grant. But you need to probe the alternate path, you need to actually be able to explicitly send a probe packet, and you need to be able to deal with the consequences of sending a packet there and it coming back, because you don't want the congestion controller to get reset twice. That's...
N
No
I
know
so,
but
so
you
have
to
lay.
Let's
start,
the
packet
number
for
the
probe
packet
is
going
to
be
part
of
your
same
sequence
base.
So
these
are
the
details
that
you're
gonna
still
have
to
to
work
out
and
once
you
work
out
the
details,
this
is
what
you
get
so
the.
AB
Hi. So, well, I think that protocol is really quite different, and this is something that has already been deployed, especially for VPNs. So I think using this alternate path is already something we have experience with, and it's something doable, and I kind of support having this alternate path; whether we need the probing or not can be solved.
U
Tommy Pauly, Apple. So, Martin, I disagree about the approach on this. Absolutely, full multipath; you said this is not full multipath, there's a lot more there, and I get that being out of v1. I think anyone who is interested in multipath should review this to make sure it is not conflicting; I believe it is not. This is the control part of multipath, but nothing more.
U
This, I don't think, adds that much complexity. I think it is, again, well understood in other protocols; it's a common model, and if we're going to put anything in, let's just do this. Essentially, the only argument for the existing model right now seems to be inertia, that it was already written; had it been written this way originally, there wouldn't be a problem.
B
So I'm going to cut the queue at this point, because we are a little time-constrained. And again, just to reiterate, from my perspective at least: I'm listening for why we need to do this in version one for the use cases we've talked about, or why it's something that needs to be in the invariants. That's what I'm still listening for here, not "is this good or not" in general.
E
Yeah, so, Thomson: if we're going to spend this long arguing about whether we do it, we should probably just argue about how to do it correctly. My point was that we can perhaps save some effort and time; but if it seems like people are interested in the feature, to the extent that we're going to spend more time fending off requests for the feature than we would actually implementing it, then I'm perfectly okay with implementing the feature. I was just raising the possibility.
R
I think this is an important use case, which will make deployment much easier and more beneficial. And the other point is, this is like halfway to multipath, right? So if we add this right now, in the first version, it will be much easier to design multipath later, instead of redesigning a lot of things again — you just think about how you would design QUIC with multipath in your head, to add it later.
R
This would be the right direction to go. And one completely different point, because there was a comment about middleboxes and NATs: this is something else we might want to add, like requesting the other side to send you a probe — maybe something like telling the other side, "simply send me a probe to this IP address." Not sure; just a proposal.
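The probing idea discussed here — validate an alternate path before using it — can be sketched in a few lines. This is a toy illustration of the challenge/response pattern, not the actual QUIC frame handling; the class and method names are hypothetical.

```python
import os

class PathValidator:
    """Toy sketch of QUIC-style path probing: send a random challenge
    on the candidate path, and treat the path as usable only when the
    peer echoes the exact token back from that address."""

    def __init__(self):
        self.pending = {}  # remote address -> outstanding challenge token

    def start_probe(self, addr):
        token = os.urandom(8)  # unguessable per-probe payload
        self.pending[addr] = token
        return token           # caller transmits this on the new path

    def on_response(self, addr, token):
        # The path counts as validated only if the echoed token matches.
        if self.pending.get(addr) == token:
            del self.pending[addr]
            return True
        return False
```

A probe that is never answered simply stays pending, which also covers the "UDP blocked on this interface" case mentioned later in the discussion.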
AA
Yeah, Bryan Ford, EPFL. Just to add to the migration-versus-multipath point: I'm concerned — I feel like a lot of both mechanism stuff and policy stuff is getting kind of lumped together here. At least, I don't see the distinction being very clear, either here or in the current draft document. I feel like the mechanism needed for both migration and multipath should be practically the same — you know, the packet formats, the methods of being able to probe, to figure out —
AA
— what paths you have, and whether you can potentially use particular addresses — versus the policy. So the mechanism should be identical across migration and multipath, whereas the main difference is in policy, right? Migration is a policy — a type of multipath policy where I just decide to use one path at a time, and I have certain ways of deciding when another path is more attractive and switching over to it completely, right?
AA
So I also see — and I think you're right — that a migration-type policy can probably be done more simply than a full-blown multipath policy, and that's a good intermediate step to have. But then there's also going to be, like, "I don't want any policy at all" — like somebody else mentioned, just the default thing, where we don't want that at all. And so, I guess, to kind of summarize my point:
AA
I think the most important thing is, from the very start, to be able to distinguish and express the distinction between these levels of mechanism and policy very clearly. From the start, I think it is very important for all the necessary mechanisms for migration and multipath together to be in 1.0, hopefully — or at least not impeded at all — but maybe both the migration and the multipath policy stuff really should be cleanly separated out into an extension or something like that. That's just how I see it.
N
Let me address that very, very quickly. What you said there — that connection migration is simply a policy — that is the view you can take when you have multipath. When you have multipath, you can say that connection migration is one particular instance, or one particular special case, of multipath. What I'm trying to do here is to not go there. Basically, how can we make connection migration a special case of single path — that is what I'm trying to do.
N
AA
So I see your desire, but my big concern with that line of thinking is that you're gonna get a large fraction of the complexity of multipath, and I'm already seeing that: in the draft, the migration part is already huge — there's a lot of text there — and, as you say, it's still very incomplete, even just to get migration, especially the policy stuff.
AA
N
An interesting distinction he just made is complexity in specification versus complexity in implementation. To some degree, we should try to reduce complexity in specification, but we definitely do not want complexity in implementations. When you're doing multipath, like I said, you will want to instantiate multiple congestion controllers, multiple loss recovery contexts, and so on, and you do not need that for this. So the question is: what exceptions can we make, and can we live with them? They're not gonna be perfect, but can we do something?
N
T
From Microsoft: so, observing that the UDP-based transport, for example, is likely to work on some interfaces and not on others, would it be useful — would it be more useful — to define migration such that it works across transports? So it's not, then, QUIC-specific, but you can, for example, migrate from QUIC to TCP, right? No? I —
N
I think we could take on a small amount of work to make some minor modifications to the current text and make this possible without overburdening implementers, and that would be totally sufficient, and I would support doing that now rather than later — because, basically, I think this is super useful. I mean, there are other alternatives, like peers negotiating whether this mechanism or these policies are available, but that's a bit of a hassle.
N
B
E
O
E
And if X+1 is reordered with respect to X+2, which is highly likely, the other side is going to treat it as lost immediately upon receipt of X+2. And all of those things I would like to understand in a lot more detail before we do this, and that is why I'm concerned about the expenditure of effort. So —
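The reordering concern raised here is about packet-number-threshold loss detection. A minimal sketch, assuming a hypothetical threshold constant, shows why: with a threshold of 1, packet X+1 would be declared lost the instant X+2 is acknowledged, so simple reordering looks like loss.

```python
REORDER_THRESHOLD = 3  # assumed value; the drafts debate the exact number

def detect_losses(sent, largest_acked, acked):
    """Packet-number-threshold loss detection sketch: an unacked packet
    is declared lost once a packet sent sufficiently later has been
    acknowledged. A larger threshold tolerates more reordering at the
    cost of slower loss detection."""
    lost = []
    for pn in sorted(sent):
        if pn in acked:
            continue
        if largest_acked - pn >= REORDER_THRESHOLD:
            lost.append(pn)
    return lost
```

For example, if packets 1–5 were sent and only 5 has been acked, packets 1 and 2 are at least the threshold behind the largest acked and get declared lost, while 3 and 4 are still given the benefit of the doubt.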
A
We're running rapidly out of time — fine, yes. I booked the Butterworth breakout room from 12:30 to 13:30 today, in case the people who want to discuss this more need space. It seats 12 people. Until 1:30? Yes — 12:30 to 13:30.
A
A
So that was the intent of kicking something like that off. Specifically, there are people who have proposed multipath designs on the list that we have basically not discussed, because we didn't feel that we had time for it; but at least in some people's minds this is very clear, and I want to invite at least the two groups that have done this already, and make sure that they're in. So let's find out: Jana, do you want to be on point to get a group together to discuss this? I'd —
A
A
N
A
A
B
N
A
A
A
N
A
N
A
B
So, next up — we've got a little less than half an hour left — next up we've got a discussion of the third implementation draft. I don't know if we're gonna hammer it all out here, but it would at least be nice to start to kick off that discussion and figure out what we want to target for interop in Melbourne.
K
I mean, I think we're not done with the first implementation draft, in the sense that we're pretty far from people having interop on all the features you could finish it off on — but I don't think that's really a useful benchmark for what we should do. I mean, there really are two things you can imagine doing, right?
K
One is saying: well, we're gonna make a much more filled-out test matrix from the second implementation draft and target no new features, but just fill out that test matrix. And the other thing you could do is say: well, we're gonna target a bunch of new features, but with the same crappy quality of implementation testing. I mean, ordinarily I'm —
K
a conservative person, and I would say, let's populate this matrix. But I think, given the conversation yesterday, trying to get some experience — any experience at all — with some of the things we haven't done yet would be valuable; 0-RTT in particular would be nice. So I think my sense is that we should pick out a small, manageable set of new things that we think belong in the third implementation draft, and say that, plus whatever is in, you know, -08 —
K
basically, is what we should do now. Martin — Martin's got a thing, right? So this is the second draft, right? Yeah, yeah — and didn't we have a list of the stuff that didn't make it on? It was like — things, things, things — yes, there we go, good. So, if I had to pick: as I say, I think 0-RTT should absolutely be on this list.
K
What else? I'm actually pretty sad that, like, no one's really testing congestion control, but I feel like it's pretty easy to hook up congestion control and get some data across the internet. So I guess I would say either loss recovery or congestion control, or —
K
— both. I mean, basically — put it differently — demonstrate that you can move, concurrently, a fairly large amount of data under more or less ideal network conditions. And we've already seen scary stuff where people try to do this and it doesn't work — like, I'm not even sure I'm correctly giving out stream credit, for instance. So, yeah, I think those are the two things I would be inclined to think about. There's still a lot left, so — even —
I
Ian Swett, Google. Yeah, I agree: I think 0-RTT is definitely on the list, and if we don't do that now, that's kind of crazy. And I think we don't need to do all of loss recovery if we don't want to: if we just want to tack fast retransmit onto the existing RTO stuff, I think that's totally sufficient, in my opinion. And then I think congestion control, especially with New Reno, is so damn easy to implement.
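The "New Reno is easy" claim can be made concrete with a minimal sketch. This is an illustrative toy, not any implementation discussed in the room; the class name, byte counts, and initial window are assumptions for the example.

```python
class NewRenoCC:
    """Minimal New Reno-style congestion controller sketch:
    slow start grows cwnd by the bytes acked (doubling per RTT),
    congestion avoidance grows it by roughly one MSS per RTT,
    and a loss event halves it."""

    MSS = 1460  # assumed maximum segment size in bytes

    def __init__(self):
        self.cwnd = 10 * self.MSS       # assumed initial window
        self.ssthresh = float("inf")    # no threshold until first loss

    def on_ack(self, acked_bytes):
        if self.cwnd < self.ssthresh:   # slow start
            self.cwnd += acked_bytes
        else:                           # congestion avoidance
            self.cwnd += self.MSS * acked_bytes // self.cwnd

    def on_loss(self):
        # Multiplicative decrease: halve the window, floor at 2 MSS.
        self.ssthresh = max(self.cwnd // 2, 2 * self.MSS)
        self.cwnd = self.ssthresh
```

Even a sketch like this, driven by real ACKs and loss signals, would exercise the "move a large amount of data" interop target described above.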
A
So the goal for the third implementation draft is Melbourne, right — for the interop — which is not terribly far away, and there's the holiday period in the middle; it's basically eight weeks minus holidays. So -08, certainly, I think, and all that stuff. I would probably add something along the lines of what was just said, in terms of: make sure you do your loss recovery properly — at least, I think that should be stable now — and maybe something beyond just flow control.
A
What would be nice: there's clearly strong interest in doing 0-RTT. I'm not sure that I can deliver, but that's okay; it's still on the list if other people feel comfortable that they can get it done. I would be hesitant to do much more than that by Melbourne. It feels like the implementations need to settle a little bit longer; some of these things haven't really been baked yet, and I would leave it at that.
N
Yeah, so 0-RTT and loss recovery and congestion control seem like good targets. One piece of feedback I would have is that loss recovery and congestion control are not much about interop, so defining very strict test cases would be useful, because the number of test cases here just explodes — reordering, loss, packet delays; there's a whole bunch of test cases.
B
K
So — I didn't pay Praveen to say that. At one of these previous events I spent some time hacking up an interop harness which basically has two implementations on the same machine connect to each other. Right now it doesn't do any simulation for any of this stuff, but it'd be easy to add — packet drops or latency introduction. So I absolutely agree with you that we should describe the tests precisely, rather than saying "you must have loss recovery" or "a congestion controller."
K
B
N
I think that's good enough: having the requirement that you are able to recover and send a large payload with certain loss characteristics. But one request would be that whatever kind of middlebox we're thinking about here, having that be a separate box versus loopback would be extremely useful, because, you know, our implementation — it would —
G
N
K
K
Yes, so that lets you repro once you run into problems, right? That is, you have a random loss generator, but it's all driven by the seed. I can produce a number of things, but I think I'm not qualified to describe what a test case with respect to loss patterns would look like, so maybe someone who really understands transport, like Ian or Jana, could write that down. Another —
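The seeded-loss idea for the harness is simple to sketch: drive the drop decisions from a seeded RNG so a failing interop run can be replayed exactly by logging the seed. This is a hypothetical illustration, not the harness being described.

```python
import random

def simulate_loss(num_packets, loss_rate, seed):
    """Seeded random-loss sketch for a test harness: the same seed
    always reproduces the same drop pattern, so a failure seen once
    can be reproduced deterministically for debugging."""
    rng = random.Random(seed)  # isolated RNG; global state untouched
    return [pn for pn in range(num_packets) if rng.random() < loss_rate]
```

A harness would drop exactly the packet numbers in the returned list, and print the seed alongside any test failure.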
D
E
D
E
McManus isn't here, so I'm gonna push a little bit harder on people. I think we are actually seeing people interop loss recovery now, simply by virtue of the fact that they're running their tests over the internet and they're working; and congestion control, I think we'll see more of as time goes by. I definitely want to get 0-RTT and resumption in. We're not going to get header compression now, because we simply haven't decided what it's going to be, but I want to be in a position where we're doing —
E
— that's what's after this, yes. Yeah, but I want to get us to a position where we're ready to do that as quickly as possible, because all the other things are out of the way. And so, looking at this list, I think things like path MTU discovery and key updates are things that can live a long time without actually being implemented, as experience shows.
O
E
Then there's loss recovery, which it seems like a lot of people have already. -08 — I think that's the first one, right? Well, that's a dependency — yep, let's do -08 first, then loss recovery. Okay, -08 is the baseline — if we're not doing -08, we're not talking to each other — then we're doing loss recovery, and then I would say 0-RTT after that, which means that you'd have to test your resumption first, which is fairly trivial.
E
Most of the TLS stacks will do that for you pretty naturally, and then I think people should be moving on to migration. I don't think that taking it explicitly off the table at this point is a good idea, because we need that sort of momentum, and it's not actually all that much. Did we have stateless retry or reset in this list? I think we had stateless reset — yes, we do. Okay, those probably need to be slotted into the list as well: Hello Retry Request, those sorts of things.
E
I don't know that there's much evidence of people actually doing these things — yeah, we'd have no way to test it. Let's pop those into the list as well, probably after loss recovery. And 0-RTT and Hello Retry have gnarly interactions, so I would put one before the other — I'd put them adjacent to each other.
A
Okay, one thing I want to mention: for the second version of this, Martin put the wiki together and sort of shepherded it, and I would specifically want to ask for somebody who isn't an editor to do the same thing for the third implementation draft. It's probably gonna be more work than Martin had to do for the second implementation draft, because we need more detail and a bit more rigor now; but I don't want to put this on the editors, because they have other stuff to do.
I
Ian Swett, Google. I generally agree with what Martin just said. Although I would say that I think we should keep the HTTP/0.9 option available and not force everyone to move to the HTTP/2 mapping, given that it doesn't have header compression and it's a fair amount of extra work compared to all the other stuff. I would rather see almost everything else get done except for the mapping, and then do the mapping later — so I don't want, like, half —
K
A
B
N
Jana: I agree generally as well with what Martin said, and I also agree with what Ian said about the HTTP/2 mapping — that we should be retaining the 0.9 mapping. I'll just make one minor point about loss recovery: I think it'll be useful, and perhaps important, to have losses that are not just, you know, random — tail losses, in fact — because otherwise I think we're not really testing performance.
B
Y
B
Right, so we'll create a wiki page there — a third implementation target — you put the general shape in there, and then, if you can also try and drive people to contribute some form of more specific tests, and think about what format that might take, that would be fantastic. I don't expect you to come up with them all.
B
Thank you very much — really appreciate it. Okay, so that is the third implementation draft; we'll get that up for discussion as soon as possible, so people can start coding and working on tests. The last item we had on the agenda — and we only have ten minutes left — is issue discussion, and I think the issue we want to discuss before we leave the room is a roadmap forward for header compression in HTTP.
B
I think what we want to try and do is to converge our proposals — so, not have additional options, but move towards a single option. And so I think what Lars and I need to do is to have a discussion with the editors, or the authors, of all those documents and figure out how we can move them forward to a single proposal for the working group. Any comments on that, or any discussion we want to have right now? I see a thumbs-up, which I'm transmitting to the mic.
F
Mike Bishop, Akamai — author of one of the drafts in question. So, in Seattle, you had asked Buck and me to go and form a design team, and for reasons which many of you know, I have had zero time to do that. So I am sorry — I have not touched that. My intent was to try and get some time scheduled for us to spin that up this coming week. Okay — so hopefully we'll have some answers —
F
— there. I think we did get at least some reasonable takeaways from the interim in Seattle: there's willingness to move away from the HPACK wire format if we get some benefit in return, but we're not committed to that; and there is definitely a preference to move the complexity to the encoder and away from the decoder, and keep the decoder as dead simple as possible. I think right now those are our guiding principles on how we merge.
E
Yeah, so, Martin Thomson. I observed that most of the drafts — even the new one — are similar in very many critical ways, and they diverge on only a few key points, so the principles that Mike just elucidated should help us. I've heard people express this week — particularly in light of the discussion we had earlier, in a previous session, about schedule — that perhaps some things we discussed taking off the table before, we might reconsider as an option, and have the —
E
B
F
B
If there's anything we can do to facilitate what you need to do, please ask, and hopefully we'll have some discussions on the lists. We're gonna reserve a chunk of time in Melbourne to do this. Okay, great. Do we have anything else we can discuss in the next six minutes? I don't think so. So let's call it a day. Thank you all very much.