From YouTube: IETF100-TCPM-20171116-0930
Description
TCPM meeting session at IETF100
2017/11/16 0930
https://datatracker.ietf.org/meeting/100/proceedings/
A: Like this? Okay — okay, let's get started. Hello, welcome. This is the TCPM working group meeting — just to make sure you are in the right room. Okay, my name is Yoshifumi Nishida, one of the co-chairs of the TCPM working group, and this is also Michael Scharf, a co-chair of this working group. Unfortunately Michael Tüxen cannot come to this meeting, but we expect he will come to the next meeting.
A: Okay, this is the Note Well — I think you are all familiar with it. Basically, what you say in this meeting will be governed by the Note Well, and if you want to take a look at it, please go to the IETF webpage — or you can google it, and you can find the Note Well very easily. And note taking — I appreciate Gorry volunteering to do the note taking, and Mirja as well; thanks to both of you. And before we start the meeting —
A: [...] please mention the draft name, so that the chairs can track the status of what you addressed. Okay, moving on — this is the agenda of today's meeting. First the chairs will talk about working group status, and after this we have four presentations on working group items. First, Naeem will talk about the alternative backoff draft; after this Marcelo and Bob will talk about generalized ECN; after this Bob and Mirja will talk about the Accurate ECN draft; after this Yuchung will talk about TCP RACK; and after this we will talk —
A: Okay — and we appreciate your cooperation in processing these drafts. We have several active working group documents. The first one is the Alternative Backoff with ECN draft; this is one of today's agenda items. In the last meeting we got several volunteer reviewers for this draft, and we have started receiving the reviews from them, which is pretty nice, so I think this draft is getting very mature. And after we receive another —
A: — review or so, we can then revise the draft, and if everything goes smoothly I think we can proceed to working group last call. Our expectation is that within a month or so we may be able to proceed to working group last call. The next one is the Accurate ECN draft. This is also on today's agenda, and this draft has been updated very recently, so let's discuss it in the meeting. And generalized ECN —
A: — let's discuss that in the meeting too. And the [inaudible] draft — I think this draft has been quiet for a while, but according to the authors they have some plan to do experiments, so we expect they will come back with a new version of the draft with some results. For RTO Consider, I think this draft is getting very mature, but in order to finalize it we have to sort out some points. The main point of discussion is keeping consistency with the related documents, because RTO Consider has related documents such as RFC 8085, RFC 6298, and RFC 793 or 793bis.
A: So we have to make sure the drafts do not contradict each other. We have had the chance to start discussion on this draft, and we will try to make a proposal to the authors of some text that does not cause any conflict with the other documents. That's what we are thinking right now; if that doesn't work well, I think we might consider other plans, but right now we are trying to focus on making a proposal that the authors can also agree to.
A: Well, we did a working group adoption call, and as a result this draft has been adopted, so right now we are trying to prepare the final form of this draft. But there is some discussion between the authors and the chairs — the chairs made some proposals, and I think the authors agree with one of them, so our expectation is that they will update the draft based on our suggestion.
A: That's the current status — any questions so far? Okay, moving on: the 793bis draft. Unfortunately Wes cannot come to this meeting, but we got some text from him, so I will talk about it on behalf of the author. As you might have noticed, the latest version was submitted about four days ago, and there are several open discussion items in this draft, so Wes has started opening threads on the mailing list —
A: — with the corresponding titles; there will, I think, be four threads to open. After we settle the open discussion items, a new version will be submitted in December, closing all the items marked TODO in the draft. That's the plan. Okay — any questions so far? Okay, moving on. Finally, I would like to talk about the MPTCP converter draft. Right now the chairs are planning to run an adoption call on the MPTCP converter draft.
A: So, to be clear, we are going to run an adoption call on this draft, and if it's accepted — well supported — the converter draft will become a working group item of the TCPM working group. That's the plan. So let me really briefly describe the background of this draft. Its full name is draft-bonaventure-mptcp-converters. This draft has been presented in the last MPTCP meeting, and it is mainly designed to assist the deployment of Multipath TCP.
A: But if you look at this draft carefully — if you look at the architecture in the draft — it has a somewhat generic architecture. To be clear, I don't say it is very generic; I don't say it is really generic, because that will be part of the discussion, but it has some generic mechanisms. And if the architecture in this draft is a generic one, we can apply this technology to other TCP extensions — maybe we can apply the same technology to this or that extension. That could be a very interesting use case.
A: So if this draft can have a generic mechanism, it will really make sense to have it in the TCPM working group. If you are interested, please see the email which he sent very recently — several days ago — which contains lots of useful information for understanding the intention of his draft. So please check it. Right now we cannot discuss the technical details of this draft, but we would like to hear some opinions on running an adoption call in the TCPM working group.
C: Mirja Kühlewind, AD for both of the groups. I would just like to repeat once again the request for reviewing this document. Even though the use case is MPTCP — and that's the use case that makes most sense — I think what's needed here is general TCP expertise, which is in this room. So please have a look.
D: So, to be clear, I would not try to make this draft applicable to tcpinc. Basically, if you make a converter applicable to tcpinc, you're describing a man-in-the-middle attack as a feature, I think, and that's not good. I'd prefer that tcpinc just be left alone.
E: Okay, so let's try it. This draft is basically an update to the alternative backoff with ECN. Next, please. At the last IETF three persons were assigned to do the review — they all volunteered to do it — so we got feedback from Roland, and then, a couple of days ago, we also got feedback from Lawrence. Thanks to them — and we have incorporated the feedback that we received from them.
E: We have submitted two revisions over the time between the last IETF and now. Revision -03 was submitted a while ago, and I just submitted revision -04 last night. So what has basically been happening: we have tried to incorporate all of the comments from the two reviewers.
E: Basically, we now have more consistent terminology and definitions. There were things that were a bit hand-wavy; we changed them and made them more precise, and we defined the things that needed more precision in the language. We distinguish, for example, between what is a buffer and what is a queue, because both terms were used in the draft interchangeably, so we provided more accurate wording there. Also, everywhere where we, for example, mentioned loss, we now write it as an inferred packet loss.
E: We also provided some clarifications. One of these clarifications was on the safety of the mechanism: we explained that the worst-case scenario with alternative backoff with ECN is no worse than standard loss-based TCP, because in that case, say, you basically don't react to ECN signals.
E: So if there is a future draft, for example, specifying a TCP congestion control behavior, then it is up to that draft whether to adopt the alternative backoff with ECN — with whatever specific beta value they might want to use — or any other type of congestion control. So the scope of this draft is basically limited to standard TCP. The references are updated; there is no change to the technical content of the draft. Yes, please — next.
F: This was Gorry Fairhurst, and I was commenting as TSVWG chair rather than as co-author, and I think that's the right thing to do with things like SCTP and other transports: we can't make a normative requirement on future specs, but we can say that this is the right process. Whether we've got quite the right words — we can tweak them a little bit if we need to — but I think that's the right thing to do from TSVWG's point of view.
E: Sure — I think we agree on that, so this is basically limited to what is in 3168, and so the references are updated; there's no change to the technical content. Next, please. As we mentioned previously in Prague, we have submitted a patch to FreeBSD upstream. Over the course of this time the patch was being reviewed; now the review is complete.
F: A clarification here — okay, the fundamental basis for the ABE work (and this is Gorry Fairhurst as an individual this time) was that if you ever see loss, you use the standard TCP values for beta; the change is only when you see an ECN mark. If you ever see loss, you behave as a normal standard TCP would.
G: Yeah, but my question is: even if you get an ECN mark — so this is deviating from what the original RFC said — there is a potential that you might get more ECN marks, or even a packet loss, in the very next RTT. So my question is: should we be more conservative in such a case?
E: The only change that we make — everything else is the same as in the RFCs — is that whenever you receive an ECN mark, you basically go down by 0.8. Normally you would react once per RTT; we didn't touch anything else. So then it should happen that you go down by 0.8, and then, of course, you may subsequently receive a packet loss in the next RTT.
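As a rough illustration of the behavior described above — multiply the congestion window by 0.8 on an ECN mark (at most once per RTT), and keep the standard halving on loss — here is a minimal sketch. This is a hypothetical helper for illustration, not code from the ABE draft or any real TCP stack:

```python
def react(cwnd, event, reacted_this_rtt=False):
    """Return (new_cwnd, reacted) after a congestion event.

    Sketch of the ABE-style reaction discussed above: a shallower
    multiplicative backoff (0.8) for ECN marks, applied at most once
    per RTT, while an inferred loss keeps the standard beta of 0.5.
    """
    if event == "loss":
        return max(1.0, cwnd * 0.5), True       # standard TCP response
    if event == "ecn" and not reacted_this_rtt:
        return max(1.0, cwnd * 0.8), True       # ABE's shallower backoff
    return cwnd, reacted_this_rtt               # at most one ECN reaction per RTT

cwnd, reacted = react(100.0, "ecn")     # ECN mark: 100 -> 80
cwnd, reacted = react(cwnd, "ecn", reacted)  # same RTT: no further change
cwnd, _ = react(cwnd, "loss")           # a loss still halves: 80 -> 40
```

This also illustrates the safety argument made earlier: the loss path is untouched, so the worst case degenerates to standard loss-based TCP behavior.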
F: I think we agree it's time to go and read the code. The ECN handling is normally a separate bit of code, surprisingly, and let's just check: ABE does not change it — that's all I can guarantee. But the question is what happens — and I don't know what happens — when the two bits of code interact with one another.
J: Essentially it sets snd_recover in response to receiving an ECN mark, and does not apply an additional backoff if a loss were to show up in that period. We can alter that if we think that's appropriate, but as currently implemented it would essentially do nothing until snd_recover was reached, and then, if a loss showed up after that, we'd have a 50% backoff — but at that moment only the 80% backoff has been applied, yep.
D: David Black. What I was trying to say is that I was the third reviewer; I finally got the review done just in time — i.e., in the last 12 hours. It's on the list. It's almost entirely editorial. The only thing that might have a little bit of technical content to it is this: the draft has a very strong focus on recent, modern AQMs like PIE and CoDel. However, this is also going to be deployed with whatever AQMs are out there, and a little more discussion of what else is — or might be — out there [would be useful].
K: Hi — Marcelo Bagnulo here, speaking for the authors of the ECN++ draft. Next slide. So in the last meeting we basically had one open issue left, which is what to do with respect to pure ACKs. There is a strong motivation for actually ECT-marking pure ACKs, because if we don't, when there are episodes of congestion, pure ACKs will be more likely to be dropped, which will result in performance impairments. So it would be useful to mark them, to avoid this type of drops.
K: The issue that was noted in the previous meeting is that, in the case where there is an endpoint that is only sending pure ACKs and you actually respond to that congestion, you have a biased response: the only signal you can get is your ACKs encountering congestion, so eventually you reduce the window more and more, and you have no opportunity to increase it. On the other hand, not responding to a congestion signal that you actually receive seems like the wrong thing to do.
K: Okay, so the question is: how do we handle this situation? After some discussion, the proposal that has been made on the mailing list — next slide, please — is to only ECT-mark pure ACKs in the case where AccECN has been negotiated on the connection. The reason for this is that AccECN actually provides more accurate information: it reports —
K
The
number
of
packs
that
of
packets
that
have
been
see
mark
but,
and
also
a
number
of
bytes
that
encounter
the
congestion
right,
so
that
allows
the
congestion
control
algorithm
in
the
sender
to
actually
be
able
to
respond
to
a
match,
a
finer-grained
signal,
because
in
the
particular
case
of
products,
the
number
of
pack
of
bytes
that
encounter
congestion
will
be
zero
right
because
the
pure
drug
doesn't
contain
date
right.
So
this
is
basically
the
the
proposed
way
forward.
We
have
mentioned
this
in
the
mailing
list.
There
was
no
negative
feedback.
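The proposed rule can be stated compactly: data packets are ECT-marked whenever ECN is on, but pure ACKs are ECT-marked only when AccECN was negotiated. A small sketch of that decision under those assumptions (a hypothetical helper, not text from the draft):

```python
def ect_mark(is_pure_ack, ecn_negotiated, accecn_negotiated):
    """Decide whether to set ECT on an outgoing segment, per the
    proposal discussed above (illustrative helper, not draft text)."""
    if not ecn_negotiated:
        return False                 # no ECN on this connection at all
    if is_pure_ack:
        # Pure ACKs carry no data, so the classic one-bit feedback
        # cannot report their CE marks usefully; only mark them when
        # AccECN's finer-grained feedback is available.
        return accecn_negotiated
    return True                      # data packets: ECT as usual

# Classic ECN connection: pure ACKs stay unmarked, data is marked.
assert ect_mark(True, True, False) is False
assert ect_mark(False, True, False) is True
# AccECN negotiated: pure ACKs are marked too.
assert ect_mark(True, True, True) is True
```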
M: That is not only about AccECN — and in those cases I've seen pretty aggressive proposals. So I'm really wondering whether we need to deal with all the ECN corner cases correctly. To me, I think there could be ways we could better decouple this draft and AccECN. There's a risk, probably, that some information gets lost — some ACKs may get lost — but the question is: do we really have to care about that —
M: — if other transports are doing completely different things? As I said, I think this draft could be simplified if we make clear how it could be deployed independent of AccECN and how it could be deployed with AccECN, but I personally would suggest clearly defining a mode where accurate ECN is not in use.
K: Okay, so the current draft clearly defines what to do when ECN is negotiated and when it is not — there is a table that actually describes all the situations. This is perfectly described in the draft, I think. I mean, maybe some editorial effort is needed to convey it more clearly — I'm happy with that — but the information is in there.
K: Whether the best approach is to have two documents back to back — one with AccECN and another one without AccECN — or to have the integrated version, that's an editorial discussion we can have, and I'm happy to have it. The first thing I would like is to close this issue, and then talk about the more editorial part. So the thing is, we have this discussion about how much to mix this with AccECN, especially for the SYN — the SYN is much more problematic than —
K: I think the person who is going to answer that is standing right behind you. All right — I mean, the thing is, I agree with her: she has made a very clear statement, an architectural statement, that you should not avoid responding to explicit congestion signals that you receive — it's a bad practice. I actually subscribe to her statement.
K: But I disagree that supporting AccECN is what complicates the problem. The problem is complicated in some specific cases — like, in this particular case, the bias from the negative feedback. That is not a problem of AccECN; it's a problem, period. AccECN actually provides you more information, which allows you to get to a better solution to the problem.
F: If AccECN is fine, and we have to do updates to implement this — we have to update stacks to AccECN anyway — could we roll the two into one? Yes. Okay, then we don't have to focus on the case where AccECN is not enabled, because we don't have to do that stuff. But let's just think about the AccECN bit — well, Mirja is going to talk about that, so I was only picking up the same thing and saying I'm at the same level.
F: I understand what you're saying — you're saying that we would need to drop all the parts that are not AccECN. I'm saying that might be a possibility. This document should be easier to read; it is a thing we want people to do, and it's quite simple. So either we say: just use AccECN and then do this, because that makes it all as simple as one update — I'm OK with that, if that's the right thing — or somehow we make it simpler, because the current text is hard to pick through.
C: Mirja Kühlewind. So I agree — I think it would make the document simpler if we would only use accurate ECN, if we only discussed that. It's a little bit the opposite of what Michael says, but I think that's actually the right thing to do, because first of all we want to deploy accurate ECN — we want to deploy accurate ECN as a replacement for ECN. It's a change, so we can do these two changes together, and that makes it much safer.
K: No, you don't really need that — I agree. I mean, as of today, what the draft says is that the solution for feeding back the congestion signal to the sender is to use accurate ECN, because we don't want to waste more bits, and so on — there is a bunch of arguments in there. You could try to do something else — I agree.
C: That's more the second point: do we actually care about the SYN being marked? If there is an indication of congestion, do we need to react to it? Is it important? That's your question, right? The difference between ECN and loss is that ECN is an explicit signal — there is congestion. For a loss —
C: — you never know for sure. You don't know why the SYN got lost; you don't know if it's congestion; you don't know what it means. But you also have the timeout: if your SYN is gone, you have to wait for another, so it's kind of a penalty, if you want to call it that — but it also means there is some time where you don't send anything on the link, so the congestion might resolve. In this case, you get an immediate signal that there is congestion, so simply not reacting to it is just ignoring it.
K: Let me suggest a way forward. I understand we all agree that, if there is AccECN, what is written in the draft is the right thing to do. So what I would suggest is: let's limit the scope of this draft to AccECN. If someone wants to do this for something other than AccECN, please write a draft and do it.
K: But — because we also say that, for instance, no matter whether the endpoints support AccECN, for, I don't know, a retransmission you should mark it anyway, regardless of whether they are AccECN-capable or not — what I'm basically saying is: let's drop all of that, which is a pretty minor part of the text, and just keep the case where the sender is AccECN-capable.
K: We deal with all the compatibility with the other endpoint, of course, but we drop all the parts about both endpoints being non-AccECN — the sender not being AccECN-capable — and we explicitly say that this only covers the case of the sender being AccECN-capable. We don't mention the behavior for an endpoint that is not AccECN-capable; it is unspecified, and whoever wants to specify it can do so.
L: That would mean — then, a second response, to Michael's point: I'm reasonably okay with being a bit more liberal about responding to loss — you know, rather than treating each loss as sacred. However, it's very different on the SYN: absolutely no non-response to loss on the SYN.
M: Well, again, my comment is on the ECN mark on the SYN or SYN/ACK, not on loss — for loss we all agree. The key question is why to care about an ECN mark on the SYN, and my point is I don't really see the point. I mean, if the router is really congested, it will drop the SYN and we're done.
K: So this is a presentation about some measurements that we have been doing regarding this experiment. This is accepted to appear in IEEE Communications Magazine soon, and I will have a link at the end of the slides to a version that you can download, in case you're interested. Next slide, please. What we wanted is to see if we can actually do some measurements to get some initial data for this experiment. In particular, what we're looking for is to find out whether ECN-marked TCP control packets — pure ACKs — are treated [differently] —
K: — than ECN-marked data packets, sorry. So basically what we have done is start a measurement campaign to see how ECN-marked — ECT-marked — data packets are treated by the network, to get some ground truth, some baseline, and then measure how TCP control packets that contain ECN marks are treated by the network, and compare the results. That's basically what we did. Next slide. In order to do this, we used two measurement platforms.
K: One is PlanetLab, which I guess you all know — we used 54 PlanetLab nodes at 25 sites in 22 countries. The other platform we used, which you may not be so familiar with, is a platform called MONROE, which basically allows you to do measurements in mobile networks. What the MONROE platform has is MONROE nodes.
K: So we tried all these types of packets with all possible ECN flag combinations, including both the TCP and the IP header, for all the variants: basic ECN, ECN+, ECN++, and accurate ECN. The tool that we used is tracebox — I'm not sure if you're familiar with this tool; it's a tool that the people from UCLouvain have developed that is basically like a traceroute that increments the TTL.
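For reference, the codepoints the campaign exercises live in the bottom two bits of the IP header's (former) TOS byte. The codepoint values below are the standard RFC 3168 ones; the helper functions themselves are just an illustration of what "clearing the ECN field" means in these measurements:

```python
# The two ECN bits sit in the low bits of the old TOS / traffic-class byte.
ECN_CODEPOINTS = {0b00: "Not-ECT", 0b01: "ECT(1)", 0b10: "ECT(0)", 0b11: "CE"}

def ecn_of(tos_byte):
    """Decode the ECN codepoint from a TOS/traffic-class byte."""
    return ECN_CODEPOINTS[tos_byte & 0b11]

def cleared(sent_tos, observed_tos):
    """True if a middlebox wiped an ECT/CE mark back to Not-ECT,
    which is the behavior the measurements look for hop by hop."""
    return ecn_of(sent_tos) != "Not-ECT" and ecn_of(observed_tos) == "Not-ECT"

assert ecn_of(0x02) == "ECT(0)"
assert cleared(0x02, 0x00)        # ECT(0) sent, Not-ECT observed: cleared
assert not cleared(0x00, 0x00)    # nothing to clear
```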
K: We executed this both from the MONROE nodes to the Alexa top-100K servers and from the MONROE nodes to our own servers — in the latter case we actually control both endpoints and we can play with, for instance, the SYN/ACK. Next slide, please. We ran the measurement campaign between January and May of this year; we used ports 80 and 443, with 25 million measurements, and so on. Okay, next. So what have we found?
K: So that basically means you send an ECT(0)- or ECT(1)-marked packet, and what happens is that the first hop of the mobile provider actually clears the ECN field and turns it into non-ECT. We found this in seven out of 11, which seems quite a bit. We then tried a few other mobile carriers — seven more — and we found that three of them have the same behavior. Yes, Emily?
P: Sure — Apple. If I'm understanding the question correctly, you're speculating that maybe the mobile phone handset itself is clearing its own ECT bits before it sends the packet. But if it succeeded in four out of the eleven mobile providers, that suggests that the handset is not clearing its own, right?
K: Okay. So we also found that one mobile provider only clears it on port 80 and not on port 443 — this, I mean, is a case of a proxy. And, as I said, we didn't find evidence of clearing of the ECN field in the traffic from the servers to the client, and for those that didn't clear, we found evidence of a little bit of clearing — 0.53 percent — which is roughly in line with the current literature on fixed networks, which in this case, for the PlanetLab —
K: So basically that means that if you get cleared, you get cleared both on control packets and on data packets, and if you don't get cleared, you don't get cleared on either of them. The boxes that do this don't seem to look at whether it is a SYN, a control packet or a data packet — basically, everything is treated the same.
K: So — we used the 54 PlanetLab nodes, which are fixed, and that's what we used for the fixed side. Another result: 61% of the Alexa top 500K support ECN, and only 3.51 percent support ECN+. However, interestingly enough, none of them respond in the way defined by RFC 5562 — we have not been able to detect any server that actually implements what is in there.
K: Actually, RFC 5562 defines a very strange behavior. Basically it says that if you receive a CE mark on the SYN/ACK — correct me if I'm getting this wrong — I think what you should do is start the exchange over again: you need to resend the SYN/ACK without [ECT] and see if you can actually get it through without the mark. That's what it says.
L: But then, while it was going through the IETF — there was an academic paper written on that, and all the experiments were done that way. Then, when it went through the IETF, there was this view that it had to be exactly like a loss, and so they tried to make it like two round-trip times' worth of delay, by bouncing the packet back off the client and back again — and it was all a bit weird.
K: Okay, next slide, please. Here you have the URL to the paper, but I will actually respond to the question on the mailing list. So, final remarks: it seems, from all this, that ECN++ is as safe as ECN — so that's probably good news. There is more work to be done in order to get ECN through, and the question is whether it is relevant that we found evidence of ECN clearing.
K: Does it matter? I mean, it's not that anything is dropping the packet; it's just that they are clearing the field. The problem is, if you have something like a multi-hop path behind the same access that is actually using ECN for something, that signal will get lost. So any potential benefits that you may have obtained — smaller buffers, I mean smaller queues, or anything like that that you have implemented before the cellular access, like, for instance, if you have a router —
P: Just remember the headline "ECN has failed — bad news — not safe to use". If that's the message you want to send — which is like "forget about ECN, it's not worth trying" — that's what your headline says, and I'm concerned. When I'm talking to my management chain at Apple, if my VP says "yeah, I saw something about ECN not being safe — 7 out of 11 mobile operators drop your connections if you try to use ECN" — he won't remember where he read it, he won't know the facts; he'll just have internalized this —
P: — this idea that ECN is bad. So that's my feedback. I would give this the title "Fantastic news for ECN: completely safe to use, 100% of the time, causes no broken connections" — and if those mobile operators want to upgrade their networks to suck less in the future, then that's even better news, right? So I would cast this as good news, not as catastrophe and calamity for ECN.
G: Microsoft — I would actually concur with what Stuart said. This is actually good news, because I'm going to present later about TCP Fast Open, and here the failure mode is just removing the bits. It's actually very safe to deploy, and you can make incremental progress, versus some of the other things we are trying to do with TCP. So I would also classify this as actually very good news.
L: Right — I guess I'll respond on behalf of the co-authors and say we'll think about those comments. Right: Accurate ECN. These are the co-authors — we've got changes in affiliations. Next slide. I'm not going to go through the recaps; this is just in the slide pack for those that are new here, so they can read these slides in their own time. Next — again, a recap of the solution, which I'm not going to go through. Next. All right.
L
So
this
follows
on
from
the
previous
talk
and
I
asked
for
the
agenda
to
be
changed
so
that
they
would
follow
on
in
in
order
to
respond
to
the
measurements
that
we've
just
heard
about.
We
have
altered
the
accurate
ecn
draft
we've
actually
taken
stuff.
That
was
in
an
appendix
waiting
to
see
where
the
measurement
studies
would
would
find
problems
like
clearing
a
CN
and
we've
put
it
into
the
main
body
of
the
document.
Now,
because
there
are
those
opportunities,
they're,
not
problems,
right
and
or
issues
and.
L: We get full information on what the network has done to the IP ECN field, at least on those first two packets. Now, we could get that information later in the connection by a bit of heuristics, watching what's going on, but particularly because it's an experimental protocol we thought: let's measure it accurately and report those measurements as part of the experiment. It also makes the code easier, because you've got a clear indication of what's going on rather than having to rely on heuristics.
L: That, I think, summarizes everything on this slide — other than: if you find that the IP ECN field has been mangled, when you get the feedback of what you sent, then you disable it for your half-connection. But there's no mechanism to tell the other end, so it's a half-connection thing.
L: So there are three ECN bits used in the TCP header, and we now use four different combinations of those to feed back all the possible values — this is during the three-way handshake — so it was already defined as a codepoint-based feedback at that point. It's only during the handshake, though: once you've got past the SYN it's no longer one codepoint per packet; you switch to just having a counter counting the number of CE marks.
L: So this is how we do this feedback — and actually the encoding we use had to fit around other uses of those bits in other versions of ECN, but essentially, if you look at the numbers in binary, they would be two, three, four and six for the four values of the field, so it's reasonably easy to do in the code. We do the same in the other direction.
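A sketch of the two feedback modes just described — handshake codepoints 2, 3, 4 and 6 echoing the IP-ECN value the SYN arrived with, then a 3-bit wrapping counter of CE-marked packets afterwards. The values are the ones stated in the talk; treat the helpers as illustrative, not as the draft's normative encoding (in particular, counter initialization and per-byte counters are omitted):

```python
# During the handshake, the three TCP-header ECN bits echo the IP-ECN
# field that was received; the binary values are 2, 3, 4 and 6.
HANDSHAKE_FEEDBACK = {"Not-ECT": 0b010, "ECT(1)": 0b011,
                      "ECT(0)": 0b100, "CE": 0b110}

def ace_field(ce_packet_count):
    """After the handshake: the same bits carry a 3-bit wrapping
    count of CE-marked packets received so far."""
    return ce_packet_count & 0b111

def ce_delta(prev_ace, new_ace):
    """Newly reported CE marks, assuming fewer than 8 arrive per ACK."""
    return (new_ace - prev_ace) % 8

assert sorted(HANDSHAKE_FEEDBACK.values()) == [2, 3, 4, 6]
assert ace_field(9) == 1        # counter wraps modulo 8
assert ce_delta(6, 1) == 3      # 6 -> 7 -> 0 -> 1: three new marks
```

The modulo arithmetic in `ce_delta` is why the wrapping counter stays unambiguous as long as no more than seven CE marks arrive between ACKs.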
L: Right, so that was the main change to the draft. The next main change was to write in a summary of the discussion that we had at the last IETF on change-triggered ACKs. There was concern that this might not be possible with offload hardware. What we've done is we've changed it from a SHOULD to a MUST — a MUST with a get-out clause — because we wanted to ensure the receiver could rely on this behavior. But, given it's experimental, we have described a possible experiment that people could do where there isn't the MUST.
L
It allows people who are having this problem — it gives them a hint that we're happy to see experimentation on this, and then it says that, because of this problem of needing to rely on the behavior, it's their responsibility to deal with the other end understanding this behavior. It's an experiment, fine, but it's their responsibility. That's the effect, right.
L
You can't use offload hardware to do change-triggered ACKs during slow start, and then the get-out is: offload once slow start has ended. That means that for most of the performance, on most of the servers, most of the time, you're using offload, and it's just for the slow starts that you're not. I'm told that there's very similar code to that already in place. Next — right, we did some minor edits as well.
L
We added that deployment itself should be an experimental success criterion. We added that you don't use the congestion window reduced (CWR) signal, just in case anyone thought you still had to use that. And we also made sure that we defined the behaviors of all unused values, so that it would be clear in the future what the behavior would be, for forward compatibility. Michael, yes?
M
This is again Michael speaking from the floor. So thanks a lot for addressing my comment on the experiment success criteria; unfortunately, I don't like the new wording either, so I think I owe you something on the mailing list. You still have wording in the document that refers to the TCPM working group, and I think this doesn't address my comment from the last working group meeting. But instead of saying it on the mic, I probably have to write it up, or maybe I have to propose alternative wording in its place, yeah.
M
So, regarding the last point, one very naive question. This is an experiment, so assume that we move this to standards track at some point in time, and assume that we have to change something. How would we negotiate the standards-track version, right? I think this is something — I mean, you don't necessarily have to document this in an experimental document, but it's something that, in my opinion, would warrant thinking. So how would the standards-track version of this be negotiated?
M
As I said, in general I think it requires thinking — I haven't thought about it — but I would actually look for a standards-track version of this at some point in time, and that's why, even in the experimental design, you should foresee, in my opinion, what the upgrade path to standards track could look like, under the possible assumption that the protocol has to change.
C
I think that's not a thing that is in any way specific to Accurate ECN: there are a lot of cases where you have the experimental thing, and then some things you can't change anymore, or it wouldn't work anymore. The things that we just described in the experiment are things that we can just change without negotiation — for example, not using change-triggered ACKs: you just change your implementation. So for the questions we have, I don't think that there's actually an issue here.
M
Well, if something on the wire changes — some encoding of the bits, for example — I mean, we don't know how the standards track would look; it's an experiment, all right. That's why, as I said — I've not looked at it, I don't have a solution — but at the end of the day, this consumes bits in the header, which actually should happen on standards track at the end of the day, so we have to think about the upgrade path.
L
I mean, the response I gave you works fine; it's like a version number. The only problem is it relies on the option being there, and if, you know, the standards-track version doesn't have the option and it only uses the three bits in the header, or middleboxes strip the option or whatever — we can do very little with three bits, because we've used as much as we can out of those three bits, and we can't do much more.
L
Okay, next — I think we're done; the next slide is: so, yes, it's been implemented in Linux. All the open issues that were written into the appendices are now closed, with those appendices deleted. So we believe we're ready for working group last call. I don't know whether you want your edits done during a later stage, or whether you want them done before the working group last call, but we're certainly willing to do them.
G
My understanding is that full TCP offload has not really been widely deployed, if my understanding is correct. The other point I would make is that some of these issues are relevant to LRO as well, and it would be nice to call that out in the draft, because at least when Windows does LRO, we specifically required that when the codepoint changes, you don't actually group those packets together.
A
Okay, yeah — thanks for the comments and for reviewing, thanks so much. Mirja, you want to say something? Yeah.
C
In the offloading case, it's mainly ACK aggregation — where the hardware decides to not send out all the ACKs — that was the concern, and that's, like, specialized hardware that is used in, like, whatever storage networks and stuff. I'm not sure if that's a real concern for a general-purpose implementation.
R
This is not on this draft, it's more on the general topic of ECN. So we do QUIC, and Ingemar is actually leading a group of people to think about how QUIC can benefit from ECN information, and I actually care more for QUIC than TCP these days. So I think if the ECN people wanted to make an impact, and wanted to make an impact quickly, you should look at QUIC, because it's going to push a lot of bytes in a very short amount of time.
R
Those of you who are already involved already know about this; I'm mentioning it for the people who might not be involved in it yet and might want to get involved, because I think Ingemar's team could probably use some help. QUIC is incredibly pressed for time — that's why we break off all of these things into little design groups that can work out a proposal for the group — and if you want this to go into v1 of QUIC, which will hopefully ship sometime next year, it needs to be ready sometime next year, which is not necessarily a timeframe that is very, you know, normal for the TCP folks.
R
If nothing is ready, right, then there's no point in having a discussion. If somebody had a proposal and there was some evidence that it might be useful, we can certainly talk about it, but without anything to discuss, it won't happen. If there is something to discuss, it might.
L
Just to give some specifics: for those people that are interested in what Lars has just said about adding ECN to QUIC, there's a design team meeting forming at four o'clock this afternoon, I believe in some room beginning with H — I can't remember which, I don't know. But anyway, if you look at the QUIC mailing list and look for Ingemar's mail, it'll tell you.
S
Okay, hi everybody, I'm Yuchung, here to present the RACK update. This is work with my colleagues Neal Cardwell and Nandita, and we are at Google. Next slide, please.
S
So, just a quick recap of what RACK is. RACK essentially is time-based detection of packet loss, compared to the old dupack-threshold approach we all know about. The best way to describe RACK is: conceptually, think of every sent packet as having a timeout, and this timeout is constantly adjusted by the most recent RTTs collected from the ACKs.
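The per-packet timeout idea can be sketched as follows. This is a simplified illustration of RACK-style detection, not the Linux implementation; the `Pkt` type and all names are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Pkt:
    seq: int
    xmit_time: float   # when this packet was (re)transmitted
    delivered: bool = False

def rack_detect_losses(packets, rack_xmit_time, rack_rtt, reo_wnd, now):
    """A packet is deemed lost when a packet sent after it has already
    been delivered (rack_xmit_time is the send time of the most recently
    delivered packet) and its per-packet deadline of
    xmit_time + RTT + reordering window has passed."""
    lost = []
    for pkt in packets:
        if pkt.delivered:
            continue
        if (pkt.xmit_time < rack_xmit_time
                and pkt.xmit_time + rack_rtt + reo_wnd <= now):
            lost.append(pkt.seq)
    return lost
```

Note that no dupack counting appears anywhere: the only inputs are send times, the estimated RTT, and a reordering window.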
S
Next slide, please. The other thing is tail loss probe (TLP), because with RACK you still need some ACK in order to trigger the recovery, but sometimes you don't get an ACK. The idea of tail loss probe is: if you wait a little bit and you don't see anything — instead of this long conservative timeout, instead of, you know, reducing the window to 1 and then declaring everything lost — you send a scout probe packet out after an RTT or two to trigger some information, and that information will tell you either:
S
okay, the probe packet has been delivered along with other packets, or the packet has been lost. Of course, there is always a possibility that the probe packet is also lost; then you wait for the long timeout and declare that everything is gone — the whole flight, even the scout you sent, got killed. The whole idea is that you have this quick timeout and a little bit of traffic to see what's going on, so you can react quickly. That's the essential idea of TLP. Next slide.
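The probe timer can be sketched as follows. The 2×SRTT base, the delayed-ACK allowance when only one packet is in flight, and the minimum floor are illustrative values in the spirit of the TLP design, not quoted from the talk.

```python
def tlp_timeout(srtt, flight_size, delayed_ack=0.2, min_pto=0.01):
    """Probe timeout (seconds): roughly two round trips, padded for a
    potential delayed ACK when only a single packet is outstanding."""
    pto = 2 * srtt
    if flight_size == 1:
        pto += delayed_ack  # the lone packet's ACK may be delayed
    return max(pto, min_pto)
```

The point of the sketch is the contrast with the RTO: the probe fires after an RTT or two rather than after a long conservative timeout.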
S
Another one is QUIC, but currently they all use RACK as a sort of additional mechanism to the classic dupack threshold, and I'll talk about how we plan to retire the dupack threshold, if we can. But before that, I'm going to talk about the major changes since the last IETF: how we optimize paths with very large BDP and also frequent reordering, and we found some issues with middleboxes rewriting the sequence numbers. Next slide, please. So the first thing we found was the large-BDP paths.
S
So this is a snapshot taken from a Google Cloud transfer. This is RTT in seconds, on a 10 Gbps link, and this cloud user is using CUBIC in that case. You can see that as it's slow-starting, getting faster and faster, you start seeing some losses — actually quite a few losses. The white line here is the data line.
S
Even with congestion control working very properly, which keeps the flight relatively close to the BDP, the Linux SACK processing engine is essentially scanning a linked list with some hint pointers, which basically assumes that, okay, the next SACK is usually somewhere close to the previous SACK in terms of sequence space. But under heavy losses, and especially when you have second-order losses — that means your retransmission gets lost again — it starts having issues, because it has to reset these hint pointers, or the hint pointers don't work that well anyway.
S
With this humongous queue, even with a lot of optimization going in, it is simply not able to retransmit fast enough, and most of the work it spends is completely duplicated, because it's like: okay, I have to scan all of this, since, you know, it's a large queue — but all I found is, oh, there is one new packet that needs to be retransmitted. So it's really that bad. Essentially this is not a protocol issue, it's simply an implementation issue, but this doesn't really happen in most of our use cases.
S
The SACK processing needs something more than that, because for RACK it's all about the time you either sent or retransmitted a packet — it's all about when the last transmission was. So you need to build a list that's sorted by time, and the good news is that insertion is constant-time in this list.
S
When we retransmit — the only difference is that some of these retransmits are not real; these are just spurious recoveries. The reason for that is that the current version of RACK uses a static reordering window of a quarter of the RTT, and in this case the reordering window is just too small, so you get reordering outside of it.
S
Even though Linux has implemented undo — the idea being that if you can detect spurious recovery, it can really help in this case, because sure, afterwards we can say, okay, that was false, we undo it — what happens is that after you undo it, the very next ACK causes it again. As the packets are making forward progress, all you're doing is the equivalent of: you hit the brake, you release the brake, then you hit the brake pedal again, and you're never able to make up for it.
S
Now the question is: how do we know the reordering window is too small or too big? We could have used the timestamp — that would help us detect spurious recovery — but we use DSACK, because DSACK is supported by Linux, Mac and also Windows, and it has been supported for a very long time on all three operating systems. So the good thing is that with most of the receivers that support this, we can have proper feedback. So what is the algorithm?
S
We start with a quarter of the min RTT; then, for every round where we see some DSACKs — which means, okay, you have retransmitted something spuriously — we just sort of progressively increment it, up to the SRTT. We don't want to make the reordering window too large, because what RACK is really designed to do is sort out reorderings within an RTT.
S
Then, how do we discover that the reordering window is too large? The simple thing we can think of is: okay, after some high number of consecutive recoveries that are free of spurious retransmissions, we reinitialize the reordering window and then try again. We think that's sort of acceptable, and really the intention is to not make the algorithm too complicated. And here is the exact same test with this new adaptive reordering window algorithm, and you can see the result.
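The adaptive reordering window just described can be condensed into a minimal sketch: start at min_RTT/4, grow by that step on each round with a DSACK, cap at SRTT, and reinitialize after a run of recoveries with no DSACKs. The step size and the reset threshold of 16 are assumptions for illustration, not values from the talk.

```python
def update_reo_wnd(reo_wnd, min_rtt, srtt,
                   dsack_this_round, clean_recoveries, reset_after=16):
    """One update step for the adaptive reordering window (seconds)."""
    if dsack_this_round:
        # Spurious retransmission detected: widen, but never past SRTT.
        return min(reo_wnd + min_rtt / 4, srtt)
    if clean_recoveries >= reset_after:
        # A long run of DSACK-free recoveries: shrink back to the start.
        return min_rtt / 4
    return reo_wnd
```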
S
Yeah, so it's not a random number: essentially these middleboxes are applying an offset that they keep, or whatever, to the sequence numbers — but they don't do that for the SACK blocks. So you can't match the SACK sequence there, because it's all just offset by some random number that the middleboxes apply. Next slide, please.
S
So for next steps, I think our vision is really to make TCP resilient to a certain degree of reordering, and to try to keep the loss recovery simple by using the time information. I think this enables ideas like flowlets or fast rerouting — you don't need to keep that much reordering buffer in the routers. And really the next work item is to make RACK and TLP the standalone recovery mechanism, without using the dupack threshold.
S
When fast recovery starts, you are saying, okay, we have to wait a quarter of an RTT. But even if we have to use the dupack threshold, it's trivial to implement in RACK, because all you do is introduce a conditional test to say: if I receive three dupacks, set the reordering window to zero, basically. This is something we are experimenting with, and we hope we will have more data about it. And the next thing I really owe the community is to update the expired draft.
G
Praveen, Microsoft. So you said that the reordering problem was theoretical; I wonder if you have any data showing, you know, what's the highest reordering degree you have seen in practice. I understand the need to make it more resilient, but my understanding is that we go out of our way to make sure that TCP packets don't get reordered in the network, using ECMP, and making sure — you know, using RSS — that we process all the packets on one core. So we actually try very hard to minimize reordering.
S
Great question. So on all the tests, on all the connections that we're looking at, it seems a quarter of RTT is really sufficient, and the key is really that, for most of the cases, the reordering is caused by parallel forwarding paths where the links have very similar delays. But we'd start seeing a problem if people try crazy ideas like packet spraying: okay, I want to spray across all the 27 links.
G
Okay, thank you. The other question was: so, regarding RACK/TLP as the only loss recovery mechanism — is there any data showing how it works on really low-latency links, like, you know, tens of microseconds? So, yeah, I mean, the problem here is that fine-grained timers are a problem, so I was wondering how you would completely get rid of, you know, dupacks.
S
I think there's also a question here: in Linux, particularly the Google version of Linux, we have moved some of the loss recovery to high-resolution, microsecond timers, so that's fairly sufficient for a data center, even if you're talking about 10-microsecond RTTs. The real challenge will come if the operating system cannot support these fine-grained timers — say it is still using millisecond timestamps — then the dupack threshold has its advantage. But I think in the data-center cases we're assuming powerful servers.
S
RACK, in my mind, is probably just as fine. Say the RTT is 10 microseconds — to go to the extreme — then RACK's fast retransmit will start, with the current setting, at 2.5 microseconds, and it's all triggered by ACKs, so we don't really require a high-resolution timer in that case. I would consider that almost as good as the dupack threshold, which would trigger probably at zero microseconds. What's more concerning is much longer-RTT links.
S
Let's say one second RTT — not that uncommon, right. Then RACK would trigger after 250 milliseconds, while dupacks might take three milliseconds to trigger. So in those cases RACK could indeed be slow, and then we might have to change how the reordering window is set — it's all about how you set the reordering window. I don't see this as RACK needing to be really advanced; just be smarter about the reordering window. I think we can deliver that, and the obvious change is simple: it's all about just changing the reordering window.
S
Yeah, so essentially on google.com a lot of transactions — or you can say HTTP responses — are really just one packet. There is not much you can do with one packet; in RACK we will send the TLP after the TLP timeout, but still, again, it's a timeout. That's why you can say it's probably 70% of the HTTP responses that are just this single-packet stuff, and that's really the reason, yeah.
T
I have also seen similar numbers from other server traces — measurements from which you can see what the actual reason was why the Linux recovery, or any other stack, was initiating the recovery — and it also seems to occur on other people's servers, so this is quite a typical number, actually: for roughly 70% of the recoveries, the first indication that there is some problem was an RTO timer expiring.
T
It's not just that problem; it's that you lose the tail of your window, so there is nothing new to transmit. So basically, when you don't have anything more to send, you end up in this condition if the tail is lost, and this 70 percent seems to be quite close to the number that typical servers get.
S
Yes, I agree. I have to admit — I think I literally copied that number from his slides, and that number may be outdated. HTTP/2 actually changes that number; it lowers it quite a bit, because HTTP/2 tends to use this one connection, and of course the per-connection volume is larger and it does all this pipelining stuff, so it's a lot less.
O
One comment regarding this RTT divided by four for the reordering window: if you look at the 5G technology, with the dual-connectivity sort of functions where you can switch the packet transport between a millimeter-wave band and another frequency band, then you can expect the reordering depth to get larger than a quarter of the RTT, I would say.
S
So I think it depends on the network. It's a little bit like what I said: it's definitely not one-size-fits-all, but you also have to pick something, and I hope people will be experimenting with some other values — you know, about an eighth or so. I was even thinking of stuff like, okay, you could have a per-route reordering window, to say: okay, on this particular route, if you really want to accommodate reordering, be more aggressive about it — it could be bigger.
K
So, as of today, the maximum window that the sender can use to send data is determined by the receiver window field in the TCP main header and the window scale option. Actually, the maximum window is reached when the window scale option is set to the maximum allowed, which is 14, and that results in roughly a gigabyte of data in the sender window. That essentially imposes an upper bound on the maximum speed at which TCP can send, right?
K
So, a simple example here: if you have a round-trip time of 10 milliseconds, the maximum speed that you have available will be something like 800 gigabits per second, and technology is already arriving with a hundred gig. Of course, we're talking about a single connection, but nevertheless we are on the order of reaching that limit. Next slide. So doubling the maximum window possible is easy, right, because actually, today, there is an unnecessary restriction of having the maximum window scale value of 14.
K
The original motivation for that was that they wanted to be able to distinguish old and new packets, so that basically allowed a maximum window of a fourth of the total TCP sequence number space. But what you actually need is only to detect whether the packet is inside the window or outside the window, so you could actually easily increase it to double that: allowing a value of 15 in the window scale option would be okay, right, and that would allow you to essentially double the value that you have so far. So we proposed this, I mean, a year ago or something like that, and the feedback basically was: well, if you consider this a problem, doubling will buy you a bit of time, but not really much, so you really should start looking into how to enable a larger increase of this. And that's basically what we did.
K
So we went and took a look at how to actually increase the TCP maximum connection window by more than this, and that is this draft. Next slide. So if you want to go beyond two to the power of 31, that implies that you need to increase the TCP sequence number space, because you need to be able to identify a larger number of packets that will be in flight. Okay.
K
This will be much more painful than the previous approach, so this will require additional changes. Next slide, please. So, ideally, the idea for this would be essentially to include a prefix of the sequence number and ACK number somehow in the TCP packet. There are essentially two ways you can think about this: either you define a new option — and that is what the other draft is doing — or you somehow repurpose an existing option.
K
Okay — no, go back, please. So, no, next slide. There is a slide in the middle that we need to stop on, so go back one. Yes. So, as I was saying: if we want to extend the sequence number, we need to be able to carry more bits of the sequence number, and the way of doing this is either you define a new option —
K
that is what the other draft is doing — or you somehow repurpose an existing one, and we thought it would be interesting to explore whether you can repurpose the timestamp option. Sorry, I said timescale on the slide; it's a typo. Okay, next slide. So why do we think that the timestamp option is a good candidate for this? There are two defined use cases for the timestamp: one is RTT measurement, and the other one is PAWS.
K
If you actually have a larger sequence number space, you don't really need PAWS, right? So actually one of the uses of the timestamp option is already subsumed into this new form. So the only thing that is left is that you somehow need to accommodate the other defined use, that is, the RTT measurement. Yes, Michael?
K
One argument for — okay, so let me state this, and then we can discuss what you just said. So one argument for repurposing the timestamp option for this is that you have a limited amount of space in the SYN. Here I have a short list of the different options that usually go in the SYN — the ones that are, I think, considered useful — and the result is that you use almost all of the available space.
K
If you want to additionally add a new option that carries the prefix of the sequence number, you probably don't have enough room to carry all these existing options plus the new option. So what you're saying is that you could actually substitute the timestamp option with the new option, and then you don't really need that — maybe possible, I guess.
K
The problem is that you actually do need that. I mean, the reason why you actually do need the timestamp option is that it is likely that you need the timestamp function in this type of connection, because these types of connections are likely to send a lot of data, so you probably need at least PAWS if you are unable to get the extended sequence number, right?
K
That's the reason why we think that if you can embed both meanings in the same option, you achieve somehow a graceful fallback, because you end up using either PAWS or the extended sequence number, and both of them, I mean, go in the same direction, right?
K
I don't know if that makes sense. Then the other argument for using an existing option is that there are some measurements that have shown that unknown options are more likely to experience problems than existing options. So actually, by reusing an existing option, you benefit from the fact that current middleboxes are likely to be more gentle with it. Next slide.
K
So the main problem that the timestamp option has is that it's basically an opaque string of bits that doesn't have any kind of flags or reserved bits. So you need to be able to signal in some other place that there is some other meaning in the timestamp option, and what we are proposing here is to use the available bits that are left in the window
K
scale option to signal that the timestamp option will actually be carrying the sequence number prefix rather than the timestamp. The reason why this kind of makes sense is that if you want to benefit from the extended sequence number, you actually need to also include a larger window scale, and this move would basically achieve both purposes. Okay, next slide, please. So if you repurpose the timestamp option to actually carry the sequence number prefix,
K
the problem is: how do you manage to still use it for round-trip time measurements? I'm running through the draft here — I'm not sure it's worth going through all of them, but in the draft there are several options and ways that you can actually combine both the sequence number prefix and, sometimes sporadically, timestamp information. Yes, Mike?
K
Okay, so, as I said, in the draft there are several ways you can still encode RTT timestamp information in order to still measure the RTT. I don't think it's worthwhile going through all of them; let's just move on. So next slide — our next slide, actually. So, in particular, there are essentially two options here: either you encode, from time to time,
K
RTT information — I mean, timestamps to measure the RTT — in the option, which basically requires you to reserve some bits as flags to indicate whether you have sequence number prefix information or timestamp information in the option; or you actually include both of them all the time, in which case you basically have fewer bits for each. So this is the option where you include both of them all the time, right?
K
So here you would end up with 16 bits for the extended sequence number prefix and 15 bits for carrying a timestamp, right? I mean, there is a draft by Brian and Mirja that actually allows some variable-precision encoding of timestamps, which would basically allow you to encode the original timestamps within 16 bits. So these are the kinds of options that we have. Next slide.
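The 16/15-bit split just described can be sketched as a simple pack/unpack of the 32-bit timestamp value. This layout is purely illustrative — the field widths come from the slide's bit budget, not from any agreed encoding, and the function names are invented.

```python
def pack_ts_value(seq_prefix16, ts15):
    """Pack a 16-bit sequence-number prefix and a 15-bit reduced-precision
    timestamp into one 31-bit value (fits the 32-bit timestamp field)."""
    assert 0 <= seq_prefix16 < 2**16 and 0 <= ts15 < 2**15
    return (seq_prefix16 << 15) | ts15

def unpack_ts_value(value):
    return value >> 15, value & (2**15 - 1)
```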
K
So I guess the first question here is: do we actually need to increase the maximum window available? That, I think, is the first question for the working group. The next question is: do we need to increase it to double, or to more than double? And if the answers to both of these questions are yes, the question is whether this is the right approach or not. But I guess we still need to figure out whether we need to increase the maximum window that is achievable in a TCP connection.
G
Praveen
Microsoft
today
the
receive
window
I
think
by
default
we
only
grew
up
to
like
16
Meg
and
we
haven't
had
like
any
real
customer
problems
resulting
from
that
so
far.
So
one
of
the
options
here
would
be
to
just
run
multiple
connections
when
you
it's
such
a
limit
in
terms
of
the
approach
I'm,
not
really
sure
if
this
is
safely
deployable
like
we
have
to
worry
about
sack-
and
we
have
to
worry
about
like
Newton's
presentation
like
middleboxes,
rewriting
sequence
numbers,
so
anything
that
tries
to
modify
and
increase
the
TCP
sequence.
K
I am happy to try to conduct a measurement campaign to try to assess some of these problems — I'm happy to do that. There are some things that it won't be able to measure, like what Michael was saying: I mean, I really don't know if a monitoring system will break. I guess what he's saying makes sense; I don't know how widely deployed these are. But I certainly can run a measurement campaign that can actually find out how much rewriting of sequence numbers and timestamps is out there.
F
I wonder if QUIC already has this problem, because it presumably messes up the measuring infrastructure we just talked about — you can't measure it. So, I mean, if the thing we're worried about here is lack of measuring it... no?
R
No — the thing we're worried about is that you're window-limited and your performance is therefore limited; that problem we don't have in QUIC. Whether the measurement infrastructure is messed up — I would assume that for UDP they don't even try to look for timestamps and things like that, so yeah, and that's a known problem with QUIC, but whether or not they can measure does not affect the throughput you're getting.
M
This is Michael, speaking as an individual. So our charter says that we should maintain TCP's usability and utility, and that also means that we have to think about certain things even if another protocol that is not in scope of this group could meet them as well. I think the technical question is: this really matters at high speed, at probably 40 or 100 gig and that order of magnitude, and the key question, in my opinion, will be whether at that speed, for example, other protocols will really work, because for that you typically need heavy hardware offloading. I think one of the interesting aspects for something in that space is how it interacts with offloading, and whether the offloading for some TCP extension is easier to get than the offloading for some other protocol. I don't know the answer, but at the end of the day, the question of which is easier to implement in offloading seems to me an important question.
M
G
I would like to add a comment about QUIC: QUIC does have a bit of a limited reachability when compared to TCP. So a general statement that, you know, we don't need to improve TCP is probably not okay. This particular problem, though, I think is not a real problem, because you can always...
G
C
C
J
J
Could we do this with Multipath TCP, essentially striping data across connections, given that it already has essentially 64-bit sequence numbers at the data sequence layer? That's another possibility.
A
A
G
G
Is this better? So, this is one of a series of talks I've been doing to inform the community on what's happening with Windows TCP. Next slide, please. This is a quick recap from the Chicago IETF: we had just enabled TCP Fast Open by default in the Edge browser in the Creators Update of Windows 10, we had experimental support for CUBIC and RACK, TFO was enabled for connections which have a greater than 10 millisecond initial handshake RTT, and LEDBAT++ was being used for internal workloads.
G
In case you missed ICCRG, I would recommend that you catch up on the LEDBAT++ details there. So, newer deployment updates: the Fall Creators Update is rolling out right now, and Server 2016 has an update as well, which is available for download; these are the updates that I'm going to be talking about today. Next slide, thanks. So yeah, we turned on TFO in the Creators Update, and it turns out that our fallback algorithm was not good enough.
G
So the failure modes actually ended up impacting user experience, and these are some of the verbatim user feedback reports we started getting from different geographies. This wasn't limited to a particular network; it was actually very widespread, and there were cases where the user would turn off Fast Open and the issue would persist. Next slide, please.
G
So, essentially, we had to roll it back on retail builds. Then we decided that the only way to make forward progress was to build our own middlebox which interferes with TFO, and we call it TF-No. These are some of the things it does: it drops all TFO segments; it can strip the cookie and data; it drops SYN segments with data; it black-holes the connections after they're established; and it does some filtering based on source and destination IP.
G
These are basically all kinds of test cases that other implementers might find useful for testing their own fallback algorithms, which is why I listed them here. And then we worked on improving our fallback algorithm so that it would pass all of these test cases and preserve the user experience for the browser. Can you go to the next slide, please?
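The TF-No behaviors described above can be sketched as a per-segment decision function. This is only an illustrative model of the failure modes listed in the talk; the names, the segment fields, and the structure are hypothetical, not the actual Windows test tool.

```python
from dataclasses import dataclass
from enum import Enum, auto

class FailureMode(Enum):
    """Hypothetical TFO-interference behaviors, one per test case in the talk."""
    DROP_TFO_SEGMENTS = auto()      # drop every segment carrying a TFO option
    STRIP_COOKIE_AND_DATA = auto()  # remove the TFO cookie option and SYN data
    DROP_SYN_WITH_DATA = auto()     # drop only SYNs that carry a payload
    BLACKHOLE_ESTABLISHED = auto()  # let the handshake pass, then drop everything

@dataclass
class Segment:
    is_syn: bool
    has_tfo_option: bool
    payload_len: int
    conn_established: bool  # per-flow state tracked by the middlebox

def middlebox_action(mode: FailureMode, seg: Segment) -> str:
    """Return 'pass', 'drop', or 'strip' for one segment under one failure mode."""
    if mode is FailureMode.DROP_TFO_SEGMENTS and seg.has_tfo_option:
        return "drop"
    if mode is FailureMode.STRIP_COOKIE_AND_DATA and seg.is_syn and seg.has_tfo_option:
        return "strip"
    if mode is FailureMode.DROP_SYN_WITH_DATA and seg.is_syn and seg.payload_len > 0:
        return "drop"
    if mode is FailureMode.BLACKHOLE_ESTABLISHED and seg.conn_established:
        return "drop"
    return "pass"
```

A fallback algorithm can then be exercised against each `FailureMode` in turn, which is roughly what these test cases enable.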
G
So the algorithm that we decided to go for is passive probing. Essentially, we do not want to do active probing, so we are still using user traffic to do this, but we want to be as conservative as possible. The one thing to realize is that to fully determine whether TFO works on a given network, you need multiple connections to the same server, because the first connection to the server would get you the cookie, but to actually exercise the cookie
G
you need to go to the same server again. But we did not want to open the floodgates when the user starts browsing, so what we do is allow only one probe connection at a time to proceed on a particular network. When you connect to a network, initially we would start probing, but we would not let multiple connections start probing TFO simultaneously. Whenever a given probe connection is closed, we look at whether there was no reset in response to the SYN, no SYN timeout, no full connection
G
timeout, data was exchanged in both directions, the connection wasn't cancelled by the upper layer, and there was no sudden RTT increase. Essentially, if all of these hold, it means that the connection succeeded. But remember that we needed multiple connections, so the state machine actually progresses depending on whether we just got the cookie or actually exercised the cookie.
G
Once a probe connection actually succeeds in exercising the cookie, which means it sent data in the SYN packet and none of these bad things happened, we declare the network as successful for TFO. If a network hits fallback for any of these reasons, we persist that, and we never attempt TFO again until the next operating system update, which means that every six months or so this state gets reset and we start probing again. If a network hits success, then we stop the probing process, basically for the entire boot session.
G
So the good news is that the Fall Creators Update is actually rolling out worldwide right now; it's been rolling out for about a month, and this is the first retail release where we have successfully shipped TFO and not had to roll back, which demonstrates that this fallback algorithm is very effective.

Jana Iyengar: a clarification question. What data do you send in these connections?
G
These probes are using user traffic, so they're not active probes. It's basically the Edge browser, for example, going to google.com; if you go to google.com twice, essentially two of those connections will end up acting as the probes.

I see. So when you say multiple connections, they're not multiple simultaneous connections?
G
Choosing some connections arbitrarily? Perhaps, yes, but it's basically one connection at a time, ordered by time. So essentially it's like a semaphore: only one connection is acting as a probe at a time, and the advantage of doing that is that if there's a real problem, we don't sacrifice too many active browser sessions for probing.
P
G
So we have a network identification scheme in Windows; for example, you could use the default gateway's MAC address. There is a way to determine that the network is unique, so even if the interface is the same, when you connect to a new network we are able to identify that it is a new network.
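A per-network key along the lines described, derived from the default gateway's MAC address, might look like the following minimal sketch. This is purely illustrative; the actual Windows scheme is not specified in the talk beyond the gateway-MAC example.

```python
import hashlib

def network_id(interface: str, gateway_mac: str) -> str:
    """Derive a stable identity for a network from the interface and the
    default gateway's MAC address: same interface, new gateway MAC => a
    different network, which is exactly the distinction the speaker wants."""
    raw = f"{interface}|{gateway_mac.lower()}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]
```

Per-network TFO probing state would then be keyed on this identifier rather than on the interface alone.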
P
Okay, that's what I thought. So what happens if there is a particular website that I access which is behind some kind of simple-minded, defective firewall that breaks all TFO? That gives the appearance that TFO is broken for me on 100% of the networks I visit, but only for that one site that's got the defective intrusion detection system in front of it.
G
That is correct. In that case, yeah, the assumption here is that the problem is closer to the user rather than closer to the server. The goal here was not to increase coverage of TFO, but rather to unblock deployment; this is the first time we are able to do it without having to roll back. So: be as conservative as possible, start out small, and improve coverage over time.
G
G
One of the problems that we have with the current algorithm is that if you notice a SYN timeout, it actually results in fallback, which could be due to bad connectivity. And because we don't persist success, it's possible that over time the number of devices using TFO actually shrinks: you could have a device that succeeded on TFO, reboots, then next time hits a SYN timeout due to bad network conditions and actually turns off TFO. So this is a preliminary number; it's possible that over time this number may change for the worse.
G
We did an A/B test targeting a retail population with TFO on and off. One of the success criteria for us was that there should be no measurable increase in page navigation failures, and that turned out to be true, which means the algorithm is actually working as designed. The failures that we are finding are correlated with geography: we certainly find that there are certain countries where TFO success rates are very poor. We also find that they're correlated with specific networks, so there are certain networks where, you know,
G
TFO success rates are poor. The data I cannot present here, but if you have any more questions about that, I'd be happy to answer offline. Yeah, as I said, the SYN timeout heuristic right now makes this algorithm extremely aggressive in terms of falling back. So one of the future improvements we're looking at is to fall back only if, on the subsequent SYN retry where we essentially remove the option, that subsequent SYN actually succeeds, which would be the case where it's not really a network connectivity problem.
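The refined heuristic just described can be condensed into a single predicate: a SYN timeout alone proves nothing, but a plain SYN (option removed) succeeding right after a TFO SYN timed out points the finger at TFO interference rather than at the path. A minimal sketch, with hypothetical names:

```python
def should_fall_back(tfo_syn_timed_out: bool, plain_syn_succeeded: bool) -> bool:
    """The current heuristic falls back on any SYN timeout; the refined one
    blames TFO only when a retry without the TFO option goes through, i.e.
    the path itself is fine and the TFO SYN is what is being dropped."""
    return tfo_syn_timed_out and plain_syn_succeeded
```

Under this predicate, a device on a flaky network that loses both SYNs no longer disables TFO permanently, which addresses the shrinking-device-count concern raised earlier.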
G
G
The other thing we are looking at is removing some of these criteria and seeing what happens in an A/B test. And, of course, we would also want to work with all of the networks where we are finding these problems, to improve the success rates over time. Next slide, please.
This is basically a breakdown of the kinds of failures that we're seeing. The black one doesn't show clearly; it is where there was actually a reset in response to the SYN.
B
N
Sure, so on one of your previous slides... it's Pat McManus with Firefox... the one before this one; there you go. We talked about full data timeout; I'm curious if you can talk a little bit more about what that is and the role it plays. We've spent a bunch of effort trying to enable this feature, and the good news is that we actually see, I think, a somewhat higher number than you do.
N
You report, on the next slide, people who successfully use it, and it has a good impact. And we have a lot of mitigation strategies; at the point where we're currently deployed, they reside mostly in the things you're talking about here, you know: resets, SYNs timing out, no data being exchanged, and so on and so forth.
N
B
N
G
So I can explain: the full data timeout is basically not a SYN timeout. Essentially, you sent data and then hit retransmission timeouts and could not recover; the middlebox black-holed the connection completely. That would be the full data timeout case. To your point about TLS 1.3: we don't have experience there, because we haven't actually turned TLS 1.3 on in any of these experiments. Right now, experimentation is limited to TFO, and we are not experimenting with either TLS 1.3 or ECN.
G
G
Jana Iyengar. So, first: thank you for doing this and for bringing the data here; this is absolutely very, very useful. Continuing along the thread that Patrick just started: I think it would be useful to have some baseline numbers, obviously doing an A/B comparison. The number of timeouts is part of the total failures, but it's not clear how much of this is due to network errors that would affect other connections as well.
G
G
Also, whether there was something similar that would have caused it to look like a full data timeout after the handshake was completed; I think the full data timeout after the handshake is completed is actually a particular pathology that we may want to look at more carefully.

Sure, yeah, good feedback; we'll work on getting those numbers. Can you go to the next slide? Yes, I'll rush through these quickly. An update on CUBIC: we did find that Compound TCP is actually very sensitive to delay fluctuations.
G
We find, especially in data centers with virtual networking, that there's a bimodal latency distribution to which Compound TCP reacts very badly, so we are switching to CUBIC as the default. That's already shipped in both the client operating system, for all connections, and the server operating system, on high-latency connections. Next slide, please. Next slide: just a marginal improvement for client upload connections; not really much to say here, because most client connections are not really high-BDP, you know, large congestion window connections. Next slide.
G
This just demonstrates one of the problems with Compound TCP. We hit losses going from one region to another; it's a high-latency link with a lot of losses, and CTCP sometimes gets into a state where it doesn't ramp up, because it finds that the latency has increased a lot. CUBIC, on the other hand, is able to successfully ramp up regardless of variations in end-to-end latency. So this just demonstrates that CUBIC is working better for our scenarios, overall, performance-wise. Next slide. Any other questions?
P
Stuart Cheshire, Apple. I just wanted to echo what Jana said: thank you for doing this. I take the point you made earlier that at this stage it is not important to use TFO in every place that it's possible; it's important to be able to use it safely and not have the customers worry about it. And if we can get 90%, then that clears the cobwebs from the internet, and then we can fight for the next 10% by fixing these broken middleboxes. So I think this is a great direction. Thank you. Yeah.