From YouTube: IETF101-TCPM-20180319-0930
Description
TCPM meeting session at IETF101
2018/03/19 0930
https://datatracker.ietf.org/meeting/101/proceedings/
B
Okay, so my name is Michael; I'm the other Michael, co-chairing this meeting together with Yoshi. This is the Note Well, which you all read when you registered for this meeting; it applies to the IETF and to this meeting in particular. We do have a note-taker, thanks Cory.
B
He isn't here yet, so if he makes it in time, you will see a presentation about the TCP spec. We will have an update on TCP Fast Open deployment, and if time permits we will look at the last draft on this list, which already had some discussion on the mailing list. The working group document finished between the last IETF and this one is the RFC describing CUBIC, and these are the working group documents which are active. The first one, Alternative Backoff with ECN, is actually in working group...
B
Last call, still. We just figured out that we don't have a milestone for it, so we have to restart everything. Now, that's a joke, but we really don't have a milestone for it; we are trying to fix this. Then we have TCP EDO. There was an editorial update, but it's just fixing typos and that kind of thing; Joe said they are trying to get implementation experience.
E
There was also a remark essentially saying the same thing, about some of the I-D references being RFCs already. So that's an easy fix; we'll fix that. And he asked whether some generic rules of thumb about the backoff for loss versus ECN adjustment would be in order. Our answer to that was that this really depends on the congestion control. Our draft does recommend 0.8 for Reno-type congestion control; and for CUBIC...
E
There is a bit of text somewhere already that says the results of our tests indicate that CUBIC benefits from 0.85, but there is no actual specification in our document of this number. Next, we got a pretty long list of comments from Marco. Some of them are pretty easy to deal with. First of all, there was a wrong statement in Section 4.1 related to the timeout.
E
There was some argumentation on why to use ECN to vary the degree of backoff, and we decided that this paragraph can really just be removed, since it isn't really specifying anything. Secondly, he wanted us to specify what happens when cwnd equals ssthresh, because the document now says this is only for congestion avoidance, but according to RFC 5681 it's not clear whether you are in congestion avoidance when cwnd is at ssthresh.
E
So our suggestion is to be conservative and conform with the previous versions of this document, which say that you only apply this when in congestion avoidance. Now, this is only definitely the case when cwnd is bigger than ssthresh. In line with what Mark has suggested, we could explain that there is a gray area.
E
There is a sentence in RFC 5681 talking about something being in the gray area, which says that this may benefit from additional attention, experimentation and specification. We'd like to say that about the case of cwnd being equal to ssthresh, as well as cwnd being smaller than ssthresh, because that is also something that is worth looking at. We looked at it, but we don't spec it. Next, there was a concern about the lower bound of two SMSS being introduced in this RFC. That comes from... it's an editorial thing.
E
At this point I just want to say that we never intended to change anything about the ECN behavior except for this calculation factor. So we'll just fix the text to make it very clear that we're not changing anything except for this calculation factor. If cwnd can be reduced below ssthresh, so be it; we're not trying to change anything about the RFC 3168 rules, except for this factor, and that's it.
B
Any other comments? If that's not the case, then I will send out a note today that I'm closing the working group last call, and ask the authors to submit a new version of the document, and then we will progress it once we have your changes. So we don't need another working group last call; we'll probably wait until we have a milestone.
G
Basically, if you see more than one marking in a round-trip time, you wouldn't know it at the sender side. So Accurate ECN is just changing the feedback from the receiver to the sender, providing you accurate information about how many markings you have seen in the last round trip. Next slide: Accurate ECN provides capability negotiation, is backward compatible with classic ECN, and it has two ways to send feedback.
G
One is using three bits in the TCP header, basically reusing the ECN bits that are already there from classic ECN, plus the bit that was previously known as the nonce bit, which is now deprecated and not in use anymore. And then it has a second way to provide you even more feedback, with a TCP option.
G
Next slide: this is how the header looks. With Accurate ECN we still have the two ECN flags, called ECE and CWR, and then the flag that was previously known as NS, the nonce sum, will in future probably be the AE flag, the Accurate ECN flag. That's what you use during the handshake, and then later on, if Accurate ECN was negotiated, these three bits are used as a field to provide you a counter. Next slide.
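The three-bit counter field described above wraps quickly, so the data sender has to interpret successive values modulo 8. A minimal sketch of that idea (my own illustration, with invented names, not code from the draft):

```python
# Sketch: interpreting the 3-bit ACE field as a wrapping counter of
# CE-marked packets. The receiver increments a counter for every CE
# mark; the sender only ever sees the low 3 bits in each ACK.

def ace_delta(prev_ace: int, new_ace: int) -> int:
    """Minimum number of newly CE-marked packets implied by two
    successive 3-bit ACE field values (modulo-8 arithmetic)."""
    return (new_ace - prev_ace) % 8

def update_ce_count(total: int, prev_ace: int, new_ace: int) -> int:
    """The sender accumulates deltas into a full-width counter."""
    return total + ace_delta(prev_ace, new_ace)
```

If more than 7 marks arrive between two ACKs the field can wrap completely, which is one reason the protocol also has the richer option-based feedback described next.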
G
This is how the option looks, also very straightforward: we have three fields for three different counters. The difference is that the counter provided in the TCP header is a packet counter, and the counters provided in the option field are byte counters. So basically, what this also means is that the packet counter does include control packets, which don't have any payload, and the byte counters don't include control packets, because they don't have any payload.
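That packet-versus-byte distinction can be sketched as follows (a toy model of the receiver-side bookkeeping, with invented names; the real field layout is in the draft):

```python
# Toy receiver-side counters for Accurate ECN feedback.
# The header field counts CE-marked *packets* (including pure ACKs and
# other control packets); the option carries *byte* counters, which
# only grow for segments that actually carry payload.

class AccEcnCounters:
    def __init__(self):
        self.ce_packets = 0  # fed back via the 3-bit header field
        self.ce_bytes = 0    # fed back via the TCP option

    def on_segment(self, payload_len: int, ce_marked: bool):
        if ce_marked:
            self.ce_packets += 1          # control packets count here...
            self.ce_bytes += payload_len  # ...but contribute 0 bytes here
```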
G
So there is an implementation. This implementation provides the basic functionality, but it doesn't cover all the fallback and special cases, so it's kind of a proof-of-concept implementation; it's not a full implementation, but it works. It's also on a somewhat outdated kernel version, I believe. What the implementation does is use the ECN sysctl that's already there and just set it to a different value to enable Accurate ECN, so the interface also stays the same. What we've done so far is implement this with an experimental option.
G
Sorry: the AE flag in the TCP header right away with the publication of this document, to be clear about what this flag is used for, and then the process would be IESG approval; that's what the document currently says. So basically the status is: we got two more reviews since the last meeting, from Gorry and Michael, thank you very much. I tried to address most of their comments and put some small changes in there. There are a few more things that need more clarification, and then I think the document is ready.
H
Questions? This is Michael, speaking from the floor. Most of my comments are purely editorial, about the wording, but there's one comment that I'd like to raise again, and this is about how to end this experiment if there's partial deployment. Because we are experimentally assigning the flag, I wonder: if we do a PS spec that follows, how do we do that if the PS spec is different from the experiment? This comes down to the negotiation, and as far as I can see there's no way to do that.
H
So the negotiation mechanism is not future-proof in that sense; maybe that's a downside. Any solution that I could think of would burn a TCP option or whatever. So maybe it is the right thing to do it that way, but I'm concerned that this mechanism is not future-proof, and I think this at minimum must be documented, and the reason why. From the header alone it's not possible to decide which variant is being negotiated.
H
The negotiation mechanism, I think, must be documented, because this is a downside of the mechanism. And, as I said, I see a risk that we will have to burn another header bit later, and we don't have many of them; it's different for options, but we don't have many header bits. I think at least this must be documented, and this is not only an editorial change of some words; it's something inherent about the mechanism and its downsides. Yes.
G
My take is that the things that we flag as questions we have for the experiment are not things that change the mechanism itself. They are things that, if you change them in the standards-track spec, you can just implement differently, and there's no compatibility problem; for example, the question about how often to send the TCP option. The receiving endpoint doesn't have to rely on a specific rate.
J
Michael, can I just ask you, before you go away... I'm just trying to understand what you mean by burning a bit. This is Bob Briscoe, for the scribe. We're using a bit that has already been used, so in a sense we're not burning a bit; we're reusing it, in that we're using it for negotiation and we're repurposing it during the connection.
H
There are two different aspects. First, the nonce was probably never used; it was documentation of a proposal for how to use it, and it probably was never used. But okay, we can decide in this group that we will use it in future; that's not my problem. My problem is: if you do a PS spec of this, and we have to change the protocol as an outcome of the experiment, you have to burn another bit, and I'm concerned about that second bit, not about this one here.
H
Taking
that
there
would
be
a
way
for
I
mean
I've
not
done
the
design,
but
there
would
be
a
way,
for
example,
to
use
the
bit
and
all
this
in
together
with
an
option
and
then
the
peer
spec.
You
remove
the
option
and
then
you
would
be
able
to
distinguish
whether
you
run
the
peer
speck
of
the
protocol
or
the
experimental
one
so
that
there
would
be
ways
to
solve
it,
but
it
ruins
bits
elsewhere
in
the
header.
So
that's
the
downside.
G
I did not change that, actually, because we had in mind that it's a receiver-side decision to use the option: if you know you're in an environment where options don't get through, then don't use it, right? So we should spell this out a little more explicitly; that was a hidden assumption we had in mind, and I added a sentence somewhere that hopefully clarifies that part.
J
Did you read my email yesterday? ... Not really? Okay, Bob, let's go. There's a statement in there that the data sender, the one that's receiving the ACKs, must have the implementation to parse the option, even though it's only assured to send an option. Yeah.
J
So that if one end has implemented it, the other end can use it. I think that's okay with you, isn't it? Because I thought your problem was that you didn't want all the complexity of having to deal with the failure of the option while the other side was sending it. But if you do get it through from the receiving end, I mean...
G
I think it's correct to say, and I'm not sure about the exact wording, but I think it's correct to say you have to have the implementation to be able to parse it. If you add another kind of switch to say it's implemented, but I already know I don't need this information, then you don't have to parse it. It's an implementation decision only.
G
I mean, what's your use case here? Because I understand that for someone in data centers, you don't want the option, but you have the code in your implementation, because you use a standard implementation that hopefully already has the code, and you never use it; you never run that code path. Is that a problem for you?
J
Bob
Brisco
again
I,
don't
see
the
problem
with
I
mean,
obviously
it's
work
doing
the
implementation.
But
if,
if
you
cause
an
incoming
option,
you
don't
have
to
have
any
of
the
code
that
does
any
of
the
they
fall
back
because
you're
not
sending
an
options
and
all
effect
for
back
as
if
they
were
sending
it.
No
yeah
yeah.
L
So, doing it time-based instead of counting dupacks. Next, please. Another sort of combined feature with RACK is called the tail loss probe, and the problem it deals with is that today, when you have a tail loss, you have to wait for a timeout, and tail loss is a common occurrence, especially in short connections.
L
Say you send packets and the last two get lost. Here your cwnd might be 100, but you only have three packets to send, and the last ones get lost; then your cwnd goes to one, according to the RFC, which is a big penalty, because literally you sent three packets and the last two got lost, so you take a timeout. The tail loss probe idea is that, on this kind of occasion, you can just retransmit a probe packet after a round trip or two.
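The probe decision just described can be sketched roughly like this (my own simplification of the idea, not the draft's exact algorithm or timer formula):

```python
# Rough sketch of a tail loss probe decision: if the tail of the flight
# has gone unacknowledged for about two round trips (and no dupacks
# arrived to trigger fast recovery), retransmit the last segment as a
# probe instead of waiting for the much longer RTO.

def should_send_probe(now: float, last_xmit: float, srtt: float,
                      unacked: int) -> bool:
    return unacked > 0 and (now - last_xmit) >= 2 * srtt

# The probe's ACK (or SACK) then gives the sender the feedback it
# needs to enter ordinary fast recovery rather than taking a timeout.
```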
L
What we do here is simply retransmit the last one, and if the last one is quickly ACKed within an RTT, then it's really just like fast recovery. There really isn't that much packet loss, because things are sort of coming back; it's not like the entire flight is lost for a long time. So you perform a fast recovery instead of a full timeout that resets cwnd.
L
So that's the basic idea of RACK, and in this I-D we have just uploaded the third revision, in which we add a few things. One is, as I talked about, a timer: the reordering window. Basically, how long do you wait before you declare a packet lost? You could just wait for an RTT, but that's too aggressive, so you add some cushion, and that cushion is what we call the reordering window, in case the packets are merely reordered.
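The core time-based check being described might be sketched as follows (an illustration of the concept with made-up names, not the draft's pseudocode):

```python
# RACK-style loss marking: a still-unacked packet is deemed lost once
# some packet sent sufficiently *later* has already been delivered,
# where "sufficiently" means more than the reordering-window cushion.

def rack_is_lost(pkt_xmit: float, newest_delivered_xmit: float,
                 reo_wnd: float) -> bool:
    """pkt_xmit: send time of the packet in question.
    newest_delivered_xmit: send time of the most recently sent packet
    that has been (S)ACKed.  reo_wnd: reordering-window cushion."""
    return newest_delivered_xmit - pkt_xmit > reo_wnd
```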
L
I'll talk about the details later. In terms of deployment, in the Google server kernel we have completely replaced all the dupack-counting approaches with RACK. So today, in our server production kernel, there is RACK, TLP and the standard timeout mechanism, which is required, but that's it; you don't see other things like FACK or similar dupack heuristics, of which there are quite a few. This is completely subsumed. Next, please.
L
So what is this dynamic reordering window? In the previous draft, the reordering window is simply set to a quarter of the RTT. Very, very simple, and it deals with most of the losses versus reordering, because a lot of the reordering that we are seeing is just at a very small scale: say the packet that you sent next is delivered just a little bit quicker than the packet that you sent earlier.
L
So the reordering degree is small. But there are cases where the reordering window can get pretty large, especially on Wi-Fi links, where it's the Wi-Fi retransmission that causes the reordering, and Wi-Fi link retransmission is highly dependent on the channel status. In this case RACK will perform a spurious loss detection: you say, okay, this packet's timer has fired and this packet should be considered lost. So if you use loss-based congestion control, then it's going to take a hit on that. In this case the initial idea we had was...
L
Precisely measure the reordering degree. But that turns out to be really complex, because when you want to do that, you have to remember the per-packet timestamps even after the packet has been acknowledged. Usually in a TCP stack, when a packet is cumulatively acknowledged, you free the packet state, because it's been delivered; in this case you would have to keep that extra state simply to do this precisely, and we believe that's not really worth the effort.
L
What we do instead is look at an option that was created a long time ago, called duplicate SACK, or DSACK. What it does is: when you receive a packet that covers a sequence that you have already received, you simply return a SACK option that says, hey, I got this duplicated sequence. It has been implemented in all the major stacks, like Linux, iOS and Windows, and it's on by default, and that's a great indication, because it signals spurious retransmissions. Jana, you have a question there?
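The way a sender can use DSACK to recognize its own spurious retransmissions can be sketched as follows (a simplified illustration; real stacks track this per retransmitted segment):

```python
# If the receiver reports, via a DSACK block, a sequence range that the
# sender had retransmitted, then the original copy was in fact
# delivered too: the retransmission was spurious, i.e. the packet was
# merely reordered, not lost.

def is_spurious_retransmit(dsack_range: tuple, retransmitted: set) -> bool:
    """dsack_range: (start_seq, end_seq) reported as duplicate.
    retransmitted: set of (start_seq, end_seq) ranges the sender
    resent in the current window."""
    return dsack_range in retransmitted
```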
L
Oh, great. So the objective here is to adjust this reordering window, essentially the time allowance for the packets, to accommodate a higher degree of reordering. So if two packets get reordered by more than a quarter of the minimum RTT, then the previous version couldn't catch that, and it would cause a spurious loss recovery. Yeah.
K
An answer to that is: if you take your favorite very, very complicated recovery scenario and then insert in the forward path something that shuffles every four packets, does it still work? And the answer is no, because you can't do the logic in sequence space; you want to do the logic in time space, so that the limiting case is to allow every packet to have an independent delay.
K
Okay. And so you're correct that there isn't a specified reordering threshold, or an implicit reordering threshold, but you want to be able to design that independently. You don't want the algorithm to have built into it assumptions about how much reordering there is, what the upper bound on the reordering distance is. Making the algorithm support arbitrary levels of reordering means that you can then put in a policy and optimization about how much reordering and how much spurious retransmission you're willing to deal with.
L
On the next slide I'll put more detail, but basically the idea is to accommodate reordering up to an RTT. Further than that, RACK couldn't do it, because at the end of the day, if you have two paths, one sending over the moon and one over a local network, there is no way we can accommodate reordering like that. Yeah.
L
So, forget that; what I need to modify is this. This is about what happens in the TSO processing: say you have packets seven, eight, nine that get SACKed; in Linux they'll be collapsed into one packet buffer, if you will, and you lose the timestamps of the individual packets.
L
Essentially you buy another quarter of the minimum RTT. It's important that you don't increase it for every DSACK, because you could get a lot of DSACKs during reordering in one round trip, so we only do that incrementally. And again, this could still miss: say the reordering degree is actually 3/4 of the RTT. Then for the next round you are still going to cause some spurious retransmissions, and then you just have to learn again.
L
So it takes some round trips to adapt to a level that can accommodate the current reordering. And then we don't want to keep this high reordering window forever, because the problem is that your timeout gets too long, and if there is no reordering, you don't want to delay your fast recovery that long.
L
So what we do is another heuristic: if after 16 (magic number, 16) loss recoveries we see that all the recoveries are done without seeing further DSACKs, then we just reset it, and repeat this process. In all of this design there are a lot of ways to make it fancier, more adaptive, but we are just trying to make it simple and good enough in our test cases. And while doing fast recovery, we temporarily set the reordering window to zero, to be very prompt in fast retransmit.
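Putting the pieces described in the last few turns together, the adaptation could be sketched like this (a condensed illustration of the heuristics as presented, not the draft's exact state machine; `min_rtt` and `srtt` are assumed inputs):

```python
# Sketch of the dynamic reordering window described above:
# - start at min_rtt / 4;
# - each round trip in which a DSACK reveals a spurious retransmit,
#   grow by another min_rtt / 4, capped at the smoothed RTT;
# - after 16 consecutive recoveries with no DSACKs, reset.

class ReorderingWindow:
    RESET_AFTER = 16  # the "magic number 16" from the talk

    def __init__(self, min_rtt: float, srtt: float):
        self.min_rtt, self.srtt = min_rtt, srtt
        self.reo_wnd = min_rtt / 4
        self.clean_recoveries = 0

    def on_dsack_round(self):
        # A DSACK arrived this round: allow more reordering next time.
        self.reo_wnd = min(self.reo_wnd + self.min_rtt / 4, self.srtt)
        self.clean_recoveries = 0

    def on_recovery_end(self, saw_dsack: bool):
        if saw_dsack:
            self.clean_recoveries = 0
        else:
            self.clean_recoveries += 1
            if self.clean_recoveries >= self.RESET_AFTER:
                self.reo_wnd = self.min_rtt / 4
                self.clean_recoveries = 0
```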
L
So the idea is to be conservative in the beginning, but once we have decided, okay, we need to go into loss recovery, then we are very aggressive in marking the packets. We could cause a lot of spurious retransmissions with that decision, but it's sort of a trade-off that we have to make. And again, this reordering window will be capped at the smoothed RTT; any reordering further than that we cannot catch, and it will cause spurious retransmissions.
L
I
would
argue
that
for
any
kind
of
case
that
would
be
always
reordering
there.
It's
you
cannot
do
that
perfectly,
like
Bob
next
like
lease,
so
this
is
just
a
showcase
of
the
two
algorithms.
You
know
on
the
right
on
the
left,
its
the
O
one
and
on
the
right,
the
new
one
where
this
is
not
a
this
is
in
emulation
where
we
deliberately
we
order
packets
to
hell
and
the
sacks
are
in
the
purple
color
you
can
see.
We
are
triggering
a
tongue
of
sacs,
including
the
D
sacs
and
in
the
over
chin.
L
It
will
just
keep
trigger
all
this
false
recovery.
So
you
will
see
the
stupid
is
only
60
mega
bits
per
second,
but
in
the
new
one,
you'll
see
initially
we're
still
now.
Ramping
are
very
good,
but
we
are
learning
and
increasing
the
reloading
window.
Once
the
reloading
window
is
pick
enough
to
accommodate
the
reordering,
then
you
don't
cause
further
spoof
retransmission
and
even
under
severe
we
order
you
can
zoom
up
on
your
speed
very
quickly.
L
So
this
is
just
to
show
that
how
this
dynamic
we
ordering
window
works
under
very
severe
reorder
and
if,
on
the
right
hand,
side
the
new
one.
If
I
look
at
a
longer
time
scale,
you
will
see
after
the
while
that
we
essentially
the
reordering
window,
will
rewind
and
you
will
Constance
police
retransmit,
but
you
will
read
nirn
and
then
pick
up
again
thanks
like
lease.
L
So
the
last
thing
is
the
dupe.
A
special
emulation
mode
do
back.
Special
can
still
be
very
useful,
especially
in
ultra-low
our
titties.
Why
is
that?
Because
in
rack
I
talked
about
a
time
out
for
every
packet
right
and
but
in
say
they
are
sinners.
The
oddity
is
less
than
100
microsecond
a
lot
of
time,
but
your
stack,
timer
tick
might
be
much
bigger
than
that
say:
1
millisecond
or
even
10
millisecond.
So
in
this
case
let's
say
you're
reloading
time
is
reality:
300
microsecond,
the
fasted
timer
you
can
fire
is
say
a
millisecond.
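The granularity problem above can be made concrete with a couple of lines of arithmetic on the numbers quoted:

```python
# With a ~100 microsecond datacenter RTT, the desired RACK timer
# (RTT plus a quarter-RTT cushion) is far below a 1 ms timer tick, so
# a time-based timer would wait about 8x too long; that is why falling
# back to dupack-threshold detection still helps at ultra-low RTT.

rtt = 100e-6                  # 100 microseconds
desired_timeout = rtt * 1.25  # RTT plus a quarter-RTT reordering window
timer_tick = 1e-3             # 1 ms kernel timer granularity

overshoot = timer_tick / desired_timeout
```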
N
So, this is Richard: is that statement actually true? I mean, I was under the impression that in the example, with RFC 6675, you would enter loss recovery after the SACK of packet 7, but once you're in loss recovery, the entire point of 6675 was to recover all these four packets, right? Now, if...
O
This is Yoshi, from the mic. About 6675: you are right. Once the packet has been considered lost, that's not it; if there is any window available, we can send that packet six, because it might be lost, but we're not sure, so now we can send it just in case, and then we can get feedback.
F
So this seems... I mean, again, I'm a little uncertain about this; I'm not pushing back, I'm just trying to understand what the intuition here is, because this seems like early retransmit is getting folded in in a strange way. Because early retransmit tries to do exactly that, right? If seven was the last packet that was sent, then early retransmit would fire recovery of six.
L
Next slide, please. So the last one is interacting with congestion control. There is the case where, on a single ACK, when we receive it, we will update the RTT, and RACK can trigger a lot of packets being considered lost. The simplest example: say you sent 100 packets and only the very last one made it. In this case RACK will get an update of the RTT, and for packets 1 to 99 it's going to arm a timer.
L
They
say
after
a
quarter
of
RTT
later
it's
going
to
mark
1
to
99
packet
lost,
so
the
in
fly
will
drop
or
come
from
100
down
to
base
essentially
zero
from
99
to
zero.
So
that's
a
big
change
of
the
in
fly
and
if
you
just
implement
the
current
reno
congestion
control,
which
always
sent
an
fast
recovery
first,
it
reduce
the
Seawind
by
half.
Let's
say:
50
right
so
see
you
in
is
50
or
SS
ratio
is
50
and
the
in
fly
is
essentially
zero.
So
what
do
you
do?
Is
you
burst?
L
50,
packets
out
and
without
pacing?
You
is
very
likely
to
induce
another
round
of
loss
which
you
have
to
spend
to
recover.
So
Linux
doesn't
have
this
problem
because
it
uses
this
proportional
reduction,
which
what
it
does
is
that
during
the
fast
recovery
you
either
do
packet
conservation
for
every
packet
sex.
You
say
one
more
packet
out
or
you
do
slow,
sir,
for
every
packet
sack
you
send
two
packets
out,
so
it
does
have
this
situation.
So
we
will
recommend
using
this
fast
recovery
approach,
proportional
reduction.
L
When you implement RACK, another helpful thing is TCP pacing, so that you don't send a big burst; that would be the most convenient solution for a lot of other situations too. Next, please. So the development of the draft is near the end; we don't plan any further algorithm changes. Of course, little tunings are always possible. And Linux, BSD and Windows all support it.
B
Thank you. One clarification question: you say Linux, FreeBSD and Windows support RACK; do I understand correctly that Linux supports version 03 of this document?
E
Mic. I may be asking something very strange, because, I don't know... The thing is, I've been playing with a variant of this, and it is really not quite the same; it's a bit more drastic in a certain way. It's just something that I experienced, and I'm wondering if the same thing could happen here; probably not, but I'm just wondering. So let me ask: what I experienced is that, with the logic of using time to decide what has been lost and what hasn't been lost...
E
Well, there were cases where I ended up terminating recovery, and recovery was basically over and had just retransmitted everything, but I was left with a large window that I was now able to basically burst out immediately. So what I needed to create was a phase of pacing after recovery.
E
If
that
sort
of
thing
can
happen
to
you,
because
I
think
a
proportional
rate
reduction
would
operate
within
recovery,
so
I
I,
don't
know
if
you
need
to
have
something
at
the
end
of
recovery
where
you,
but
you
can.
You
know,
because
basically
the
a
clocking
always
allows
you
to
send
out
another
packet.
L
So I definitely agree with your observation, as we see the same thing. That's why we recommend using FQ pacing, basically in general, not just for this; the PRR is really just for the fast recovery. But again, TCP is inclined to cause bursts because of the cwnd versus inflight differences, and that is sort of out of scope of the loss recovery.
F
In our implementation we actually chose to keep it in sequence-number space, because, for example, if you do TSO or LSO, then it's a group of packets that are transmitted at pretty much the same time. So there's some implementation efficiency gained by tracking this per sequence range rather than per packet, which also means that we can't track the original packet and the retransmission separately. And the other thing I found is that there is one case that you talked about, tail drop, which will not be triggered.
L
I think the draft covers that already; that's exactly why we put TLP there. Because in the end, RACK's most-recent-ACK logic still requires some ACK, and for tail drops, where you don't get any ACK, there is just nothing you can do using these timestamps; you have to send something to trigger an ACK, to cause more recovery actions. So it doesn't matter whether you keep the timestamp per sequence range or per packet boundary. Yeah. Okay, another...
F
But my question is: the safety property would be obtained just by doing the correct inflation of the congestion window, right? The pacing is an optional part, because today, for example, without pacing, the conventional loss recovery similarly inflates the congestion window. So, as long as we guarantee that safety property, then the other portions of PRR are not really needed for RACK.
B
Okay, so I would say: give the implementers a bit of time to catch up with version 03, hoping that whoever has implemented version 02 and has interest in version 03 can report whether it works, whether it's implementable in a good way, or if they have any comments; so, get some feedback before starting a working group last call on that. Sure.
J
Bob
Brisco
I'm
wondering
whether
this
draft,
whether
what
you're
trying
to
do
is
standardize
the
algorithm
you
have
thought
of
or
whether
you're
trying
to
standardize
allow
people
to
use
different
algorithms
and
you
you
need
to
describe
what
you
were
trying
to
do
as
a
sort
of
black
box.
Do
you
see
what
I
mean
because
it's
the
former
Saints
right
yeah?
So
it's
it's
documenting
one
algorithm.
That's.
J
Okay,
that
you,
you
might
want
to
say
that
and
and
I
think
the
draft
would
benefit
from
having
what
the
algorithm
is
trying
to
do,
not
just
what
the
algorithm
is.
Okay,
can
you
repeat
that
again
what
the
algorithm
is
trying
to
do,
not
just
what
the
algorithm
is?
In
other
words,
what
are
the
objectives.
J
Warning
just
you
you've,
just
you've
just
made
it
like
reading
a
user
manual
about
something
that
says
this
button.
Does
this
this
bus
and
as
this
this
button
does
this
but
says
what
you're
trying
to
the
machine
is
trying
to
do
as
a
whole.
F
I'll be quick: Praveen, Microsoft. Sorry, one more question. You said your implementation has replaced all the other loss recovery with RACK. Is it a goal for the draft or the RFC to say that an implementation should do that, or are you going to leave that open for implementations to do both, if they choose to do so? To do both, yeah.
F
Jana. A very quick suggestion: this is a working group document, so I think you should take input from people who have specific suggestions about motivating text and so on. Specifically speaking to Bob: Bob, you write very well; it would be wonderful to actually have what you're suggesting, but I would also say, you know, send us such text. Yeah.
C
Okay, so this presentation is both the first presentation of the converter draft in front of TCPM (because it was already presented in MPTCP in Prague last year, but we did not have the opportunity to present in TCPM in Singapore) and an update of the draft after the working group acceptance. Next slide, please. So the initial motivation for the converter comes from the work that has been done in the MPTCP working group.
C
In MPTCP we see that there are more MPTCP clients than MPTCP servers, and there is a benefit to using MPTCP in the access network, to be able to combine different paths even if the server does not support MPTCP: you can go through a converter that supports MPTCP, so that you can benefit from the two paths in the access network even if the server has not yet been upgraded to support MPTCP. Next slide, please.
C
So what are the objectives of the TCP converter, which has now been accepted by the working group? It is to aid the deployment of new TCP extensions. If you look at the history of the different TCP extensions, we've seen that extensions are first deployed on the clients, and then they are deployed much later on the server side; and in enterprise networks and service provider networks...
C
There is a possibility of deploying converters to aid the deployment of some TCP extensions. The TCP converters act like proxies: they will proxy connections initiated by clients, and the objective of the converter draft is to do this proxying without requiring additional entities. One important point of the converter draft, compared to other solutions, is that the converter has the ability to inform the client of the options that are supported by the server, so that the client can detect this.
C
If you want to do MPTCP in the access network, from a smartphone to a server that does not support MPTCP, you just use MPTCP to the converter, which acts as a TCP proxy, and then you have a regular TCP connection to the final server. Next slide; the next slide, and next again. So what are the basic principles of the converter? It's an explicit TCP proxy between the client and the final server, so the client knows the IP address of the TCP proxy.
C
C
The commands to the proxy are carried in the SYN, during the initial handshake, and the commands and responses are encoded in TLV format to simplify the parsing and the processing of the options. And there is a way for the converter to inform the client of the TCP options that are supported by the server, to enable it to bypass the converter if the server supports the required extensions. Next slide. So now I have a set of examples to show you in principle how it works and what is added. In all the figures, you will see the following.
C
There are three colors. The green color corresponds to the IP addresses, the blue color corresponds to the TCP header, including the TCP options, and the red color is the information which is added by the converter protocol; this is part of the byte stream, and it is encoded as a set of TLVs. So let's do an example.
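That TLV framing, a type and length followed by a value carried in the byte stream, can be sketched in a few lines. This is an illustration only: the type code and field layout below are invented for the example, not the values assigned in the converter draft.

```python
import struct

# Hypothetical type code; the real values are assigned in the draft itself.
CONNECT_TLV = 10

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode one TLV: 1-byte type, 1-byte total length, then the value."""
    return struct.pack("!BB", tlv_type, 2 + len(value)) + value

def parse_tlvs(data: bytes):
    """Walk back-to-back TLVs in a byte stream, yielding (type, value)."""
    offset = 0
    while offset + 2 <= len(data):
        tlv_type, length = struct.unpack_from("!BB", data, offset)
        yield tlv_type, data[offset + 2:offset + length]
        offset += length

# A Connect TLV carrying the final server's IPv4 address and port (the red
# part of the figures: data in the byte stream, not a TCP option).
connect = encode_tlv(CONNECT_TLV, struct.pack("!4sH", bytes([192, 0, 2, 1]), 443))
addr, port = struct.unpack("!4sH", dict(parse_tlvs(connect))[CONNECT_TLV])
```

The point of the fixed type/length framing is exactly what the speaker says: the receiver can skip over unknown TLVs without understanding them.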
C
So the client wants to reach a server. To be able to reach the server, what the client will do is send a SYN with the TFO option to the converter. We use the TFO option to be able to put data inside the SYN, and the data that we put inside the SYN is the IP address and the port number of the final server. So the converter receives the SYN and, thanks to the TFO cookie that it has provided to the client,
C
It confirms that the SYN is legitimate, and what it will do is initiate a connection, next slide please, to the final server, and it knows the IP address of the server from the Connect TLV, which was part of the original SYN. Then the final server will reply to the SYN with a SYN-ACK. Next slide, please. And so the connection from the converter to the server is now established, and the converter, next slide please, will confirm with a SYN-ACK that the connection to the client has been established.
C
There is the possibility to add a new TLV where you put a DNS name instead of an IP address; in the current version it's only an IP address, because we want to stay on the conservative side. Next slide. So, as I said, one of the benefits of the converter is that you can detect whether the final destination supports a given option.
C
So let me take an example with MPTCP, but it would work with other TCP options. The client creates a connection request through the converter, and here we are using the MP_CAPABLE option, which is shown as MPC; we are using the RFC 6824bis format, so just MP_CAPABLE without the key, to the converter. Next slide. The converter will try to establish an MPTCP connection to the server; the server is MPTCP-enabled, so, next slide.
C
Next slide, please. This is returned to the client, so that the client, by just parsing the TLV, which is part of the byte stream of the TCP connection from the converter to the client, will know that the server supports MPTCP. And knowing that the server supports MPTCP, the client can decide, for the next connection, to bypass the converter and go directly to the server.
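The bypass decision just described, remember what the converter reported about each server and skip the converter next time, can be sketched as follows. The cache layout and names here are illustrative assumptions, not taken from the draft.

```python
# Cache of the extensions that the converter reported for each server,
# learned by parsing the extension TLV in the byte stream.
server_capabilities: dict = {}

def record_supported_extensions(server: str, extensions: set) -> None:
    """Store what the converter said this server supports."""
    server_capabilities[server] = set(extensions)

def should_bypass_converter(server: str, wanted_extension: str) -> bool:
    """For the next connection, go directly to the server if it already
    supports the extension we were using the converter for."""
    return wanted_extension in server_capabilities.get(server, set())

record_supported_extensions("203.0.113.5", {"mptcp", "sack"})
direct = should_bypass_converter("203.0.113.5", "mptcp")        # bypass
via_converter = should_bypass_converter("198.51.100.7", "mptcp")  # unknown server
```

An unknown server keeps going through the converter until the converter reports its capabilities.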
C
So that's a generic way of enabling the clients to learn which options are supported by a given server, and it means that you can bypass the converter for this specific destination. Next slide, please. So, to be able to use TFO from the client to the converter, you need to be able to know the TFO cookie which belongs to the converter, and for that there is a bootstrap connection.
C
So when the client starts, it has to initiate a connection to the converter, just a SYN with an empty TFO option and a Bootstrap TLV, and the converter will reply to the client, next slide, to indicate the TFO cookie that the client has to use to reach the converter, and the TCP extensions that the converter supports and for which it allows a conversion. So that's a way to learn the capabilities of the converter during the bootstrap procedure. Next slide.
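The bootstrap step can be modeled as one round trip that seeds the client's view of the converter. A minimal sketch, assuming invented message and field names; the real TLV contents are defined in the draft.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConverterCapabilities:
    """What the client caches after the bootstrap round trip."""
    tfo_cookie: bytes                # cookie for reaching the converter itself
    supported_extensions: frozenset  # extensions the converter can convert

def handle_bootstrap_reply(reply: dict) -> ConverterCapabilities:
    """Turn a (mocked) bootstrap reply into cached converter capabilities."""
    return ConverterCapabilities(
        tfo_cookie=bytes(reply["tfo_cookie"]),
        supported_extensions=frozenset(reply["extensions"]),
    )

caps = handle_bootstrap_reply(
    {"tfo_cookie": b"\xde\xad\xbe\xef", "extensions": ["tfo", "mptcp"]}
)
can_convert_mptcp = "mptcp" in caps.supported_extensions
```

Everything the client learns here is used later: the cookie for 0-RTT SYNs towards the converter, and the extension list to decide what can be requested in a Connect TLV.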
C
So now we know the converter cookie, and we can use it on the client to reach the converter. So next slide, yeah, next slide again. The tricky part is: how can you do TFO to reach a server while you are already using TFO to reach the converter? The idea is that, since we have done the bootstrap, we know the cookie of the converter, so we use that in the TCP options.
C
That's the blue color: we use the cookie that we received from the converter. And we want to obtain the cookie from the server, so inside the Connect TLV that we put in the byte stream during the establishment of the connection, we have a specific TLV that indicates that we want to send the TFO option, and to put an empty TFO option towards the server.
C
So when the converter creates the connection to the server, what it will do is send a SYN with an empty TFO option, which comes from the Connect TLV of the original SYN of the client. So the server now knows that the converter is trying to create a connection and requests a TFO cookie. The server will reply with the TFO cookie, so this cookie is assigned by the server.
C
So this TFO cookie assigned by the server is returned to the converter, and what the converter will do, as with the MPTCP example, is take the TCP options that have been returned by the server and put them as TLVs inside the SYN-ACK which is returned by the converter to the client. And the client can parse the TFO option which is included in the byte stream, and from the TFO option included in the byte stream,
C
It can detect the cookie (SC) that has been provided by the server, and the client can cache this cookie for this specific server. Now, on the next slide, we will see how the client can open a TFO connection to the server. What it will do is go to the converter and use the Connect TLV, specifying the TCP Fast Open option with the TFO cookie which has been supplied by the server.
C
So this TFO cookie is copied by the converter into the SYN that it sends to the final server. Next slide. So you see the TFO cookie which has been allocated by the server; the server recognizes the cookie, which it provided to the IP address of the converter, and it accepts the data. Then you can do the SYN-ACK, and the converter does the SYN-ACK as usual. So the solution works with TFO as well, just by using the information in the Connect TLV. So next slide, please.
C
So this was the introduction of the converter, and this is basically what I had shown in Prague last year. Now I will try to describe what we have done since the working group adoption, which was three or four weeks ago. So we made lots of small changes to try to clarify and simplify the text, and we have looked at what other TCP extensions than TFO and MPTCP could be supported by the converter. So next slide, please. So first we looked at
C
What we would call the base TCP options, the options that are part of RFC 793: End of Option List, No-Operation and Maximum Segment Size. For these options, we don't see any value in doing a conversion on the converter, so what we propose is that the converter does not provide conversion services for these kinds of options.
C
Next slide. Then there is window scale. The window scale option is really a property of the TCP implementation, and with the converter we have a TCP connection from the client to the converter and another TCP connection from the converter to the final server. It makes sense for the client to request the converter to use window scale, but we don't believe that it makes sense for the client to request a specific window scaling value from the converter.
C
K
There are some hazards in the window scale option: you can't avoid retracting the window if you're close to the end of the window space, and the rounding is such that you cannot specify the boundaries of the actual window you want to announce. I'll take it offline; okay, but there are some deep traps lurking here.
C
So, next slide. Then we have Timestamp, Selective Acknowledgements and Multipath TCP. We believe that these options can be supported by the converter; this is true for Timestamp, for SACK-Permitted and for Multipath TCP. And of course kind 5, the selective acknowledgement option itself, cannot be advertised in a SYN, so it doesn't make sense to support it on the converter. Next slide. Then Fast Open: I explained how we can support Fast Open, so this is part of the design of the converter draft. Next slide.
C
TCP User Timeout: there is an option related to the TCP user timeout, kind 28, but it's unclear to us whether this option has really been implemented or not. We know that the socket option is implemented, but we don't know whether the TCP option is implemented. So, is someone aware of an implementation of the TCP option for the TCP user timeout? If you are, let me know, and then we can think about how to support it and whether it would be useful to support it. Next slide.
C
Then there is the TCP Authentication Option. The objective of this option is to be able to detect modifications to the TCP header and the TCP payload. This option is, in principle, against a proxy in the middle, so it does not make sense for a converter to be able to support the TCP Authentication Option. Then the question is whether there is a benefit in trying to support the NAT extension of the TCP Authentication Option, and we'd be interested in having feedback from the working group on that.
C
This is the list which is registered by IANA. In the current draft we do not consider these experimental TCP extensions, and our suggestion is that those extensions should be described in a separate draft. Otherwise, we will have a draft that will need to be updated every time there is a new experimental TCP extension. Next slide.
C
So, to conclude: we started from a proposal that was focused on Multipath TCP and TFO, because there is a clear demand to be able to support converters for Multipath TCP, but the discussions within the IETF convinced us that there are other use cases for other TCP extensions. So in the draft we tried to take into account all the comments that were raised during the email discussions on the MPTCP and the TCPM mailing lists. So we have an application-level protocol.
C
We still need to have a service name or port number reserved by IANA; we use 0-RTT through TFO, and the client can bypass the converter if the server supports the extension. So the next steps will be to improve and fine-tune the discussion of the other TCP extensions based on the feedback from the working group, and we will get feedback from implementers and do interoperability tests. As far as I know, there are three implementations of this solution that are being developed.
C
So what happens when you do TLS from the client to the server? You have to rely on the certificates that are provided by the server, and so the server will authenticate the connection. Since you encrypt everything and you authenticate everything, passing through a proxy or passing through a router doesn't change anything, because TLS is working at the byte-stream layer and not at the TCP header. So the only issues are the IP addresses, and
C
L
I still find it hard to convince myself about the motivation. It seems like the motivation slide is about supporting multipath inside the access network, yes, but the most useful MPTCP feature or usage I've seen is to solve the parking-lot problem, where you use MPTCP across both the Wi-Fi and LTE interfaces on your phone, or whatever interfaces. It doesn't seem to match this case, because then where do you put the converter? It has to be very close to the server.
L
C
Just one motivation, for example: if you look at what has been explained about the deployment and the usage of the SOCKS proxies in Korea to bond Wi-Fi and LTE together, the use case there is to put the proxy in the network operator itself. So you have a network operator that provides Wi-Fi and LTE services, and the proxy is inside the operator's network.
L
Okay, so essentially the access network is doing both Wi-Fi and cellular service. Okay, but I feel that's very limited; at least it doesn't apply to the case in the U.S. Maybe in Europe, of course, but I don't know how many ISPs are providing both services together with a large user base. So how...
L
C
If you are an incumbent operator, typically you will have mobile services, and then, if you have DSL, you have Wi-Fi access points on your DSL network, and many operators are providing solutions such as Fon or others to share the Wi-Fi access points among multiple users. So this is pretty common. Okay.
L
C
L
A
State your name. Jonathan Modie; sorry, I violate the height requirement, evidently. I had two comments. My first comment is that I found some tension in the draft between the idea that the client should be able to specify exactly which extensions it wants to use, the purpose being to enable clients to use those extensions when they want to, at least up to a converter that supports them, versus what seemed like much less control.
A
That is, the control the client has over how the converter talks to the server. It seems like there were a lot of places where the draft says, well, the client can request something, but the actual values are dependent upon what the converter's stack supports, and it struck me that there was a real dichotomy between the fine-grained control you want to give the clients and the lack of control they have over that other TCP session.
A
C
But this is something that we plan to do. The idea is that we want to let the converter specify: I support these TCP extensions; let's say I support MPTCP and SACK, and I'm able to do the conversion services for MPTCP and SACK; and then, for timestamps, say: I'm implementing RFC 7323 and I will do timestamps for all the TCP connections that I open to the final destination anyway, so this is not a conversion service. So there is a...
C
A
My second comment is that I think the draft really needs to explain why or how you're going to avoid all of the normal pitfalls that we see with middleboxes. I mean, you're essentially adding a middlebox here; there are all sorts of inherent concerns with that, and I think that your draft really needs to explain how you're going to avoid all of those.
C
So we are aware of the issues with middleboxes, because they played a key role in the design of MPTCP, and we had to work around all of them with MPTCP. I think the difference between the existing middleboxes and the converter is that the converter is explicit.
C
So you know what the converter will be doing to your TCP connection, and this is a major change compared to the middleboxes. And then we can add a section looking at: if we have on the path a middlebox that would remove a TCP option, here is what will happen; and a section saying: if we have on the path a middlebox that removes information in the SYN payload, this is what will happen; and so on.
A
C
Well, we can extend the middlebox section. There are two parts to the middlebox section: the part between the client and the converter, and I think this is the one where we should have the focus, and then there is the part about middleboxes that would sit between the converter and the final destination, but that is covered by the existing TCP extensions. So I would focus on the client-to-converter path and the presence of middleboxes on those paths.
C
A
A
Now, it's not a transparent middlebox, but it's still a middlebox, and there are still considerations that make us typically not like middleboxes being there. You need to make sure that you address those as part of your draft and say how the converter is not going to cause those problems, or how it's going to work around the problems that are inherent with middleboxes. Yeah.
G
So, the answer I have in my head is that the big difference is that you're actually connecting to the converter: the destination IP address is the destination IP address of the converter. You open an end-to-end TCP connection to the converter, which is really not the same as a middlebox. So a lot of those considerations we're worried about really...
I
F
Praveen, Microsoft. In your slides you showed the success case, where the connection was successfully established. What about the error cases, where the server sends a reset back, or it might be doing DDoS protection, or it might be trying to validate the client's source address ownership? How would all those things work?
C
F
C
No. So we have two TCP connections: there is one between the client and the converter, and one between the converter and the server. And if the connection which is attempted by the converter to reach the server is refused by the server, then the converter will refuse the connection to the client.
F
F
F
C
F
C
So, on this slide there is one IP address for the converter, one for the client and one for the server, but a deployment could use a block of IP addresses on the converter and have one IP address which is directly mapped to each client. So it doesn't mean that all the connections will go through a single IP address on the converter; the converter could have a block of IP addresses here.
Q
Hey, Jacqueline here. My concerns are similar to the previous ones. I think the draft says that configuring the list of converters in the clients is out of scope, and I understand that, but are there any configuration strategies that are known to work? For example, when the Wi-Fi is in one network and the LTE is in another: the draft talks about the routing being configured such that only certain client IPs would reach the converter, right?
C
C
If some of the paths do not reach the converter, then, when you are trying to open a TCP connection, you can apply something like Happy Eyeballs or whatever to detect which path is able to reach a converter. Because the converter is an IP address, you can test whether a specific path will reach the given converter. Okay.
C
The bootstrap allows a client to try to open a connection to the converter. If it has multiple paths, then you can try to do the bootstrap in parallel over the different paths and see which path reaches the converter that it wants to interact with, and once it has done the bootstrap to this converter, then it knows that this path is working to reach the converter. Okay.
Q
Q
Q
C
G
On the same point: you have an IP address, so if your path is connected to the Internet, you should be able to reach that IP address. It's not like the converter is somewhere on the path that you have to bypass; you actually explicitly send your traffic to the converter.
Q
G
M
My name is Tim Shepard. As people have been asking more questions, I get more confused about what exactly this is, and I think perhaps there's another part of the story: you also need a converter piece that is inside the client operating system somewhere. I don't know whether it's in the kernel or in some library, but I was trying to imagine: does the app have to be reworked so that it knows to connect to this converter instead of connecting directly to the server?
C
M
C
M
C
Looking at the TLS case: when you create a TLS session, you know the IP address of the final server. In this case, you would have to change the TLS implementation on the client a bit, to say that we reach this final server, which is part of the Connect TLV, via an explicit proxy. And it would be some... I guess I'm speculating.
M
M
R
From Apple here. Another question, in regards to the TFO converter functionality: I would imagine that, in a scenario where the access provider is deploying converters at large scale, there will be many, many different converters, sort of like a load-balancing scenario maybe, and I could imagine that the outgoing IP address from the converter to the actual server might be changing, in which case you might not get much benefit out of converting TFO, because that relayed cookie won't be valid anymore.
C
It depends on how you manage the IP addresses on the converter. But if you look at IPv6, for example: with IPv6 you could have a deterministic way of mapping the client address onto a converter address, and I think this solves your issue. And the other point is that, with the bootstrap procedure, the client is able to connect to one specific converter, and so there is some stickiness in the relationship between the client and the converter.
R
C
But then, when the converter goes into maintenance, you will detect that you are not talking to the same converter, because your cookie for the converter will be refused by the converter. So you will know that all the cookies that you have learned have to be deleted on the client side, because they are not valid anymore: you are not sure that you are still using the same IP address towards your final server. But this is part of the client implementation.
C
I
L
You can chain... and I think it would be really helpful in the draft to have sort of a walkthrough of the process. Say a mobile phone client is trying to use this feature and wants to go to HTTPS youtube.com: how does it get the converter address, and how does it initiate those connections? I think that would help a lot with this sort of related problem of middleboxes and deployment schemes, because right now we're trying to infer that based on what you presented, and that's why you get so many questions about it. Yeah.
C
So we can write this as an appendix, but there are multiple deployments that are possible. We can give an example where, for instance, you would have a DNS name that corresponds to the converter; but this is not the only possible deployment, and so we can just provide that as an example. There are many deployments that are possible. Yeah.
A
C
So the main issue is on the converter side. If you want to implement that solution: when we receive a SYN, we don't want the TCP stack to reply immediately with a SYN-ACK; we want to delay the acceptance of the SYN until we have accepted the connection on the server side. So this is not totally a user-space implementation: you need to extract the SYN and put it in a buffer.
C
C
O
B
S
S
S
So hello, this is going to be a very short presentation about an I-D that we wrote with David Borman. Next slide, please. I assume that you have read previous versions of this document. Essentially, if you look at 793, there's a bug in the sequence number validation: with the rules that we have in RFC 793, there are three scenarios that would actually fail. One of them is TCP simultaneous open.
S
Essentially, we accommodate zero-window probes of one byte, whereas in the case of Linux they do a more general test, in which they allow for zero-window probes of any size, not just one byte. I mean, one byte is the de facto standard, but they accommodate zero-window probes of any size. Next slide.
S
S
K
This is Matt Mathis. This particular group of bugs is in the category of things that cause me to warn any new implementers that there are a lot of really important implementation details that are not written down. Getting them written down would be a major step forward towards actually getting a completely self-consistent description of the protocol itself.
H
So, this is Michael speaking. That is certainly one option that we could discuss. However, formally, when we adopted 793bis, we actually decided that we would only adopt changes that have TCPM consensus, or even IETF consensus to be precise, so we would violate our procedure. Having said that, it's certainly one option that we have. As far as I know, 793bis has at the moment an informational comment on this problem already, in some non-normative text. I mean, changing the equation...
H
According to our previous consensus, we would actually need a standards-track specification for it. As I said, we can discuss what we do exactly, but if we were to change 793bis in this respect, we would probably need special care specifically for this change, because we don't have an RFC for this specific thing. So...
L
H
I mean, for sure, for 793bis the most important thing would be to agree, other than acknowledging that there is a problem, on whether there is agreement on one solution or several solutions, and possibly on recommendations to implementers, because it's already understood that there's more than one solution to the problem.
S
It's not that it's wrong. The thing is that you could actually send window probes of any other size. The one-byte window probe is a de facto standard, but nothing requires this zero-window probe to be exactly one byte; you could do zero-window probes of five bytes if you wanted.
S
No. Actually, what we proposed in our document is to accommodate zero-window probes of one byte. Now, when it comes to zero-window probes, there's nothing that requires them to be one byte. So what Linux did, which is not what we are necessarily recommending, is accommodate zero-window probes of any size; that's not there.
S
What we actually do in the document is just accommodate the case of a one-byte zero-window probe, because that's the de facto standard. But Linux made it more general; I guess they said: okay, if there's no requirement that those have to be one byte, then we should accommodate zero-window probes of any size. And that...
L
S
You have a problem. The problem that we have now is that there are three scenarios that fail according to the text that you have in 793: those are simultaneous opens, simultaneous closes and zero-window probes. So that's the problem that we are trying to solve. Now we have one proposed workaround for that, which solves all of those problems and accommodates window probes of one byte, which is the de facto standard. Okay, now Linux, when they fixed it...
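The failure being discussed follows directly from the segment acceptability test in RFC 793. The sketch below implements that test literally (ignoring sequence-number wraparound, which a real stack must handle with modular arithmetic) and shows that a one-byte zero-window probe is rejected by it:

```python
def rfc793_acceptable(seg_seq: int, seg_len: int, rcv_nxt: int, rcv_wnd: int) -> bool:
    """The four-case segment acceptability test from RFC 793, taken literally."""
    def in_window(s: int) -> bool:
        return rcv_nxt <= s < rcv_nxt + rcv_wnd
    if seg_len == 0 and rcv_wnd == 0:
        return seg_seq == rcv_nxt
    if seg_len == 0:
        return in_window(seg_seq)
    if rcv_wnd == 0:
        return False  # no data segment is ever acceptable on a zero window
    return in_window(seg_seq) or in_window(seg_seq + seg_len - 1)

# A one-byte zero-window probe sent at RCV.NXT fails the literal test,
# even though the receiver must process it for the connection to make
# progress once the window reopens.
probe_ok = rfc793_acceptable(seg_seq=1000, seg_len=1, rcv_nxt=1000, rcv_wnd=0)
```

As the discussion notes, the draft's workaround accommodates the one-byte de facto standard, while Linux generalizes it to zero-window probes of any size.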
L
B
Michael Tüxen, as an individual. I think what this document is missing right now is documenting the solutions used by the several operating systems. So you describe in detail how Linux is doing it, but, while I think it's agreed that it's a problem in the specification, we don't know what all the implementations are doing. So I would suggest reaching out to a couple of implementations and documenting what they are doing. I'm willing to offer help for FreeBSD, but reach out also to other ones and document this in this I-D.
B
S
B
I
B
P
Lars Eggert. I think that fixing the bug in 793 is sort of a no-brainer, and we should do this. Whether that means that we do an erratum against 793, after we reach consensus on what should be in the erratum, or we put it in 793bis, which will eventually obsolete 793, I don't really care; but I think fixing this is a good thing.
P
I would also say, and I think this is going towards what the previous speaker said: it would be nice if whatever we did wouldn't render major stacks immediately out of compliance, because, you know, we are late here; they have all fixed this. So if we can write text that documents what the major stacks have already done, and doesn't force them to change what they have been implementing for the last couple of years, that would be nice.
P
So that means allowing certainly the one-byte thing that the BSDs are doing, but maybe also permitting the multiple-byte thing that Linux has been doing. Not that Linux would really care a lot, I think, if all of a sudden they didn't follow this new RFC anymore, but it would be nice, I think. And, similarly to Michael's point: understand what the stacks are doing before we make a recommendation; this is step one towards that. But yeah, this seems like a no-brainer to do, and it shouldn't take very long, quote unquote.
B
Okay, so I think the question was adopting this as a working group document. My view is that we should first get the document to include information about implementations and that stuff, so we are not asking about working group adoption right now. What we would like to have is a show of hands: who thinks that it's good to address this area, this problem space, and who thinks it doesn't matter? So can we have a show of hands on that?
S
Yeah, in connection with that, I just want to respond to what Lars mentioned. I agree with that, and, you know, while the document right now proposes one workaround, we could propose the alternative workarounds as well, and that's fine. Regarding whether these should be fixed or not: I'm curious myself whether you could actually move 793bis to Internet Standard if you don't fix it, because you have the requirement to actually fix known bugs.
H
On that one, as I said, I think there is already text on this known issue there, and the only thing that is missing at the moment in 793bis is a change of the equation. So, I mean, I think the text already states that there is a known problem there and that there are known solutions or workarounds, and this community could agree that that's good enough. But can you actually move the document to full Standard if the document has known problems?
H
Well, my understanding of the process is that 793bis would be a Proposed Standard, but I don't know the process that well; I think it doesn't move directly to full Standard. Okay. But anyway, this document actually has a different scope than 793bis, because 793bis also removes a lot of content; it's only the base spec. So I don't think we're safe from our problem here. But of course, I mean, having a bug in this document is certainly not perfect.
K
Given that it's clear that, excuse me, there are bugs in 793, excluding them from the scope of 793bis seems kind of artificial. That is what the words were when the planning was done, but this bug feels to me like it's below the horizon for being a scope change, below the limit for a scope change, and important enough to do as an exception. I really am uncomfortable with the idea of leaving 793 containing a bug that we know about, that we've known about for two decades.
B
F
Hello, everyone. This is a talk on how the experience with deploying Fast Open on the Internet has been going so far. I presented some information at the last IETF; this is more of an update since then. Can we go to the next slide, please? A quick update: we enabled this by default in the Fall Creators Update, which has been out in the market for a while now; it is on the significant majority of all Windows 10 devices at this point. So the way we could enable Fast Open was to build a fallback algorithm.
F
So, when you know there are middlebox problems, how do you safely recover from those? For that, first we built a middlebox that simulates all those known problems, so it also serves as kind of a regression test; if you want details on what kind of test cases we added, it's in the previous presentation. And then we ended up implementing a passive probing algorithm. When I say passive probing, it is actually using live user traffic.
F
I should have used the word active, but it's actually using user traffic to figure out whether TFO works on the network. Last time we found that around 26% of devices were successfully using TFO and did not fall back. We also did an A/B test to see if there was any statistically significant correlation with page navigation failures, and we couldn't find any problems, which meant that the algorithm was actually working fine. We did find that there were failures correlated with both specific networks and geographies. Next slide, please.
F
This is again a recap of that fallback algorithm; I wanted to quickly go over it. This is limited passive probing. The goal here is to not impact user experience, so when we do probing, we want to make sure that we don't sacrifice too many connections. The way this works: establishing that TFO does work on the network requires multiple connections to the same server, which means you need multiple probe connections, but we only allow one probe connection to go through at a given time on a particular network.
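The "one probe connection at a time per network" gate described above could be sketched as follows. This is purely illustrative — the class and method names are mine, not Windows internals — and it assumes that an inconclusive probe simply allows probing to resume later:

```python
class NetworkProbeState:
    """Sketch of limited passive probing: at most one TFO probe
    connection in flight per network at any given time."""

    def __init__(self):
        self.probe_in_flight = False
        self.verdict = None  # None while still probing; True/False once decided

    def try_start_probe(self):
        # Refuse a new probe if a verdict exists or a probe is already out.
        if self.verdict is not None or self.probe_in_flight:
            return False
        self.probe_in_flight = True
        return True

    def finish_probe(self, result):
        # result: True (TFO works), False (fall back), None (inconclusive).
        self.probe_in_flight = False
        if result is not None:
            self.verdict = result
```

Because a verdict needs multiple connections to the same server, several probe rounds may run, but never concurrently on the same network.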
F
The other important thing is that we wait for the connection to reach the CLOSED state before we can figure out whether the probe was successful, and then it has to match all these other conditions: no reset was received in response to the SYN, no SYN timeout, the connection didn't fully time out, data was exchanged in both directions, the connection wasn't cancelled by the application, and no sudden RTT increase occurred at any time during the connection lifetime.
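Read together, the conditions just listed amount to a single predicate over the finished connection. A minimal sketch, with field names of my own choosing rather than the actual stack's:

```python
def probe_succeeded(c):
    """All of the stated conditions must hold for a probe to count."""
    return (c["reached_closed"]        # connection reached the CLOSED state
            and not c["rst_to_syn"]    # no RST in response to the SYN
            and not c["syn_timeout"]   # the SYN did not time out
            and not c["conn_timeout"]  # connection didn't fully time out
            and c["data_both_ways"]    # data exchanged in both directions
            and not c["app_cancelled"] # application didn't cancel it
            and not c["rtt_spike"])    # no sudden RTT increase

# Example of a connection that passes every check.
good = dict(reached_closed=True, rst_to_syn=False, syn_timeout=False,
            conn_timeout=False, data_both_ways=True, app_cancelled=False,
            rtt_spike=False)
```

Any single failing condition makes the probe inconclusive or a fallback trigger, which is what makes the algorithm so conservative.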
F
There are some shortcomings here. The goal was to be very conservative — we wanted to avoid any problems with the user experience, so the algorithm was very conservative. One of the problems is that the SYN timeout heuristic actually accounts for a large fraction of the fallback cases we see: if you look at the pie charts, the SYN timeout is a pretty large chunk of it, and that could be simply because of connectivity issues. And we also don't persist.
F
We also assume that the problems are closer to the client and not the server. And then there are possibly long delays before we can figure out that TFO actually works: if you have a long-running HTTP/2 connection, it might take a very long time for it to reach the CLOSED state, and then you would delay enabling Fast Open for that whole duration, because of the CLOSED-state rule that I described earlier.
F
In other cases the connection could take a very long time to time out, for instance if the RTT was very high. There is also the problem of the worst-case middlebox: the middlebox just sees the TFO cookie request and then blocks all traffic from that IP address. That's the absolute worst case for TFO, and this algorithm doesn't really help — the user experience is still bad if that happens. Next slide, please.
F
So in the next version we are actually making the algorithm more aggressive. Firstly, we will now persist both success and fallback. So if we do see probes succeed on a particular network, that is essentially remembered forever, until the operating system is updated again. We will also start every device probing again, because this is a different algorithm, so we are starting with a clean slate.
F
So when this update goes out to the devices, they will all start probing again. Because the SYN timeout heuristic was so aggressive, we are now only turning off TFO if the SYN with the TFO option failed and the subsequent SYN succeeded — this tells you that it was not because of network connectivity, it was because of the option. Also because of this change, the probing now has to be restricted to Internet-connected networks; you could still exercise TFO on a private network if you wanted to.
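The refined rule — blame the TFO option only when the retransmitted SYN without the option got through, and persist the verdict per network — could be sketched like this. The names (`verdicts`, `record_probe`) are hypothetical, not the actual stack logic:

```python
# Per-network persisted state: True = TFO works, False = fall back.
verdicts = {}

def record_probe(network, tfo_syn_acked, plain_syn_retransmit_acked):
    """Attribute a failure to the TFO option only when the plain
    retransmitted SYN succeeded; otherwise it was just connectivity."""
    if tfo_syn_acked:
        verdicts[network] = True   # persist success
    elif plain_syn_retransmit_acked:
        verdicts[network] = False  # the option itself was the problem
    # Neither SYN got through: plain connectivity failure, no verdict.

record_probe("home-wifi", tfo_syn_acked=False, plain_syn_retransmit_acked=True)
```

Persisting both outcomes is what removes the earlier shortcoming where success was forgotten and devices had to re-prove TFO repeatedly.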
F
One more case that Rose found is that in some places, TCP options are just reflected back to the client. So if you send a cookie request, the server sends you back the cookie-request option. That's just a weird deployment, so we again use this as a condition to turn the feature off. Next slide, please. Some more data: this is roughly the overall population of retail devices out there right now. This is from the Fall Creators Update; the Spring Creators Update is not yet deployed.
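The reflected-option condition can be expressed as a simple byte comparison. A sketch, assuming a cookie request encoded as the TFO option (kind 34) with no cookie payload — the encoding shown is my illustration, not code from the stack:

```python
def tfo_option_reflected(sent_option, synack_option):
    """A real server answers a cookie request with an actual cookie.
    A byte-identical echo of the request option means something on the
    path is reflecting TCP options, so TFO should be turned off."""
    return synack_option == sent_option

# Kind 34, length 2: a TFO cookie request with an empty cookie.
cookie_request = b"\x22\x02"
```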
C
F
A retransmission, yeah — so that SYN has to succeed; it's a retransmitted SYN without the option. — Okay, yeah, very cool, thank you. — So of those devices, we find that overall about 46 percent succeed and 54 percent fall back. If you look at the overall population, it's about 21 percent, because the number of devices has grown. The other shortcoming I did not explain was that, because we were not persisting success, it is possible that if there are TFO failures later, the overall successful device population might taper off over time.
F
We do find that there are some really poor success rates in some geographies. In China, for example, only 3% of devices that completed probing succeed with TFO, and in India it's about 18% — not great numbers. We then repeated the A/B experiment on the retail population, where we now have a much larger device population, and again found no significant correlation with page navigation failures, which tells us that the algorithm is working as intended. Next slide, please. Looking ahead: this algorithm is really complex, so what we are looking to do is see whether we can simplify it.
F
It seems to us that one of the better ways of doing this would be to actually use active probing, which means that we don't end up using any user traffic to figure out whether TFO works. This solves the problems which we haven't addressed yet — there could be long delays before you figure out that TFO works, and the user experience could still suffer if something bad happens — and it simplifies the whole algorithm. And then the other remaining problem is —
F
One of the important things I wanted to ask is that other browsers should now consider enabling TFO on Windows. Because the operating system has the fallback algorithm, browsers do not need any kind of complex logic to detect whether this feature works; the safety is provided in the TCP stack itself. So it should be pretty safe for browsers to turn this on and rely on the built-in fallback algorithm in the operating system.
F
The question I want to end with: RFC 7413 seems like an important RFC to me. Yes, I realize that QUIC is gaining more traction, but there are networks where it will not work, and you would still want to achieve zero-RTT with TCP. So I would recommend that this group consider whether to make it standards track, and then, if we do go along that path, it might be useful to document possible fallback algorithms for dealing with middlebox problems.
F
Jana Iyengar: thanks for the presentation; that data is great. A quick question on the data — the previous slide, if you can go back to it. I know there's one particular error that the Apple folks encountered, and I don't know if you've thought about how you might provision for that: this was when a TFO SYN was sent and the entire IP got blackholed. Have you thought about that? — Yeah, I actually covered that as a shortcoming of this fallback algorithm; we cannot recover from that case.
F
We have actually not seen any kind of problem like that in the deployment so far, which means such a bad case is actually not happening out there. — All right. It might be good to see, since it's an A/B experiment, whether the total number of connections that are failing is in fact different — the total population of successful connections with fallback versus without fallback. — Yeah.
M
F
We correlated this with navigation failures in the browser, and there is no statistically significant correlation, which means that turning on TFO is not causing pages to start failing to load. — That's good to know, although, unfortunately, the Apple experience still seems to suggest that even if it happens in a tiny fraction of cases, it's effectively fatal.
F
G
R
For one example: we found one middlebox where you send a SYN with data and it gets acknowledged. You can send a lot of data, you can receive gigabytes of data — until you stop sending. Then the traffic stops, and after 10 seconds the connection breaks, and it was due to TFO. Which is why, on our side, we are still very conservative and don't dare yet to enable it globally.
F
H
K
F
I think I'm partly echoing what Matt just said, which is: I don't think it's sensible to even look for 100% coverage. It's completely reasonable to have some failures; I think that's natural, normal, and it ought to be expected. But I still think it should be — I mean, it'll be an experiment, and I think the fallback is absolutely critical. But yeah, I guess I just want to second what Matt said.
O
This is a comment about making RFC 7413 a proposed standard: I have a little concern about it, because we cannot use TFO for just any kind of TCP connection. If it can be used with idempotent transactions, that's fine, but sometimes that's not the case, and then we have a problem with idempotency. The RFC 7413 draft already mentioned such concerns, but if we make it proposed standard, we have to describe the applicability very clearly. Without such things, it's too risky to make it a proposed standard. That's my opinion.
F
L
Yuchung here. I'm not sure I agree with your point about TFO becoming a standard, because when it was proposed as experimental, people already made it very clear in the document — there is a requirements section in TFO, in particular a paragraph saying that TFO is only good for idempotent applications — and that's already clearly documented. So if you want further warning, we can improve that; I don't think this should be a barrier to the standards track.
J
F
By definition, all of the work we do is of limited applicability. — Jana Iyengar: I had a question, actually — something else I forgot to ask earlier, about the data. It's very interesting that you have such a ridiculous skew by geography. You said only 3% of connections were successful in China and 18% in India. Do you have any more insights into this? Because that seems surprising to me; 18% is a ridiculously low number.
F
What I can say is that it's not just one ISP; we see this across networks in those geographies, so it's probably some middlebox. It could be a firewall — I don't know; we don't have those details at this point. — Do you have data on the type of failure that was encountered? Like, which of the failures that you listed up there was it?
F
At the next IETF — because right now one of the problems is the SYN timeout heuristic, which, just because of bad networks, you could hit all the time, and that would be one reason why the success rate is so bad. Now that that is going away in the next update, we might get different numbers, and yeah, I will get back to you. — I had a similar thought about the subsequent — no, not subsequent — the connection timeout after handshake success, which could also be something to mind.
F
It might be — I don't know how to think about biases in the data — it might be worthwhile normalizing that against what the normal connection timeout is for that region, or for that particular grouping, whatever you have, just to understand whether that's actually causal or not. — Yeah, that's a fair point. So again, the goal here was to not be very aggressive, because we are using user traffic; once we do active probing, that changes.
F
T
F
No — so we have a custom implementation of Happy Eyeballs; it's not the algorithm in the RFC. For IPv4 versus v6 with TFO, there is no Happy Eyeballs interaction, because we just use the user connection to probe. So if the connection was v6, which is usually preferred, we would do the probe on the v6 connection.
B
U
My co-author Tim presented a draft called in-band signaling for transport at the last IETF. One suggestion we received from the last IETF was to write a separate congestion control draft, so here we are. These are the prerequisites to use this new congestion control algorithm: it is not meant to replace existing TCP congestion control; it is only to be used for applications which have really high-bandwidth, low-latency requirements that current TCP cannot satisfy, and then, together with the in-band signaling, you can use this new algorithm. So, two important prerequisites.
U
First, you have to guarantee the bandwidth before the data transmission; you can use either out-of-band or in-band signaling. Second, OAM data is used constantly to monitor network status — more importantly, it is used to monitor the queue depth. Once it reaches the pre-configured threshold, it will send an alarm to indicate that the network might be congested, and then we shrink the congestion window.
U
A comparison between the proposed algorithm and TCP Reno: the important thing to notice is that, since we have the bandwidth guarantee, we don't have slow start or fast recovery as in TCP. At the start of the transmission, the congestion window can jump to the committed rate (CIR) right away, and then the OAM alarm, together with duplicate ACKs, is used to indicate congestion; when that happens, the congestion window size only drops to the CIR.
U
Okay, so this is a summary of the important changes: it's used for bandwidth-guaranteed networks; we don't have slow start but jump to the CIR directly; the congestion window size stays between the CIR and the peak rate during congestion avoidance; and, importantly, OAM is used to indicate whether the network is congested or not.
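As I understood the talk, the window dynamics could be sketched roughly as follows: no slow start (start at the window corresponding to the committed rate), additive growth up to the peak-rate window during congestion avoidance, and a drop back only to the committed-rate window when an OAM alarm or duplicate ACKs signal congestion. This is a hedged reconstruction, not the draft's specification; the units (packets per RTT) and all names are illustrative assumptions:

```python
def update_cwnd(cwnd, cir_wnd, peak_wnd, oam_alarm=False, dup_acks=0):
    """One congestion-avoidance step of the sketched algorithm."""
    if oam_alarm or dup_acks >= 3:
        return cir_wnd              # shrink only to the guaranteed rate
    return min(cwnd + 1, peak_wnd)  # additive increase, capped at the peak

# The connection starts at the CIR window: no slow start.
cir_wnd = 10
peak_wnd = 40
cwnd = cir_wnd
```

The key contrast with Reno is that the floor is the guaranteed rate rather than a multiplicatively decreased window, which only makes sense because the bandwidth is reserved beforehand.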
U
B
Thank you — this was really on time; our schedule is now over. So I would suggest taking the further discussion to the mailing list; it already started there. And this concludes the meeting. Thank you for attending; see you in Montreal. And if you haven't signed the blue sheets, do it now and bring them back.