From YouTube: IETF103-TCPM-20181105-1610
Description
TCPM meeting session at IETF103
2018/11/05 1610
https://datatracker.ietf.org/meeting/103/proceedings/
A: That means you agree to follow IETF processes and policies. If you have any concerns about that, please read the Note Well very carefully. You can find the same content on the IETF webpage. Also, thanks to Gorry for note-taking, and we might want one more person who takes notes; if someone could volunteer here, or check in via Meetecho, that would be great.
A: Thank you. And this is a simple reminder: when you speak up during the meeting, please say your name at the mic before you speak, so that the note-takers can track your name in the minutes. Also, when you submit your individual Internet-Drafts, please include "tcpm" in your draft names, so that the chairs can track the status of your drafts.
A: And, as we already announced, in this Bangkok meeting we have two slots. The first slot, which we are having right now, has four items. First, the chairs will talk about the working group status, and after that Carlos will talk about the TCP in constrained-node networks draft. This draft is actually a working group item of the LWIG working group; however, it contains a lot of TCP material, so it is to some extent TCP related. So the LWIG working group chairs and the TCPM working group chairs agreed to run the working group last call on both working groups' lists once the draft becomes ready, and, according to the authors, this draft is getting ready for working group last call. That's why we are having a discussion on this draft in this meeting: so that we can check whether this draft is ready from the TCPM working group's point of view, and that's why we allocated 20 minutes for this presentation. And then, after that, Bob talks about his draft.
A: Okay, so now I'm moving to the status of documents. We finished the Alternative Backoff with ECN (ABE) draft a couple of months ago, and today, about an hour ago or so, we received an email from the RFC Editor: they have approved the draft, so it will be published very soon. Thank you so much for your cooperation. Then we have several working group documents. Generalized ECN (ECN++): this draft has been updated recently, and it is one of the agenda items for today.
A: This draft was just updated about a week ago, and we'll talk about it a little bit more after this. Also, 793bis: this is also updated recently, so it keeps being updated, keeps progressing. Also, the tcpm converters draft has been updated recently, a week ago; so if you are interested, please read it and please give us feedback. Then Accurate ECN: we don't have updates since July, but we are expecting that a new version will come from the authors, at least by the next IETF. And TCP RACK:
A: This draft has not been updated since July, but basically we are expecting some feedback, mainly from implementers. So if you have some opinions or feedback, please let us know, so that the authors can produce a new version. And for the remaining draft, there has been no big update for a while, but according to the authors they are doing some experiments; once they have some results from the experiments, they will come back with a new version. That's the status.
A: You know, we had a very long discussion about how to sort out the requirements in this document versus other documents, such as RFC 6298, RFC 8085, or 793bis, etc. One tricky thing is: if one document says this is a MUST, and the other document says this is a SHOULD, what is the message we should give the readers?
A: That kind of thing resulted in a very long discussion, and because of that we suspended the document for a while. Then recently we had a very nice discussion with the authors, and this update basically revises the draft to address these issues. Basically, the authors created a new section, section 2, in which they also describe the motivation:
A: in which kinds of situations this draft can be useful, and, if there is some difference between a requirement in this draft and in another document, how you should resolve it and how you should read this draft's message. That's the point of the new update. The chairs think this update is pretty nice, and we are expecting that it can resolve the previous
A: very long discussions. So basically, what we are asking you is: please read this draft and give us feedback on whether the new section resolves the previous conflicts. If people think this new version looks good, we can start thinking about publishing it, or initiating a working group last call on this draft, so that we can publish it. I think this draft is to some extent useful for some situations.
A: Let's move on to the next one. As I mentioned, we will have a second slot for the TCPM meeting tomorrow, Tuesday, from 11:20 to 12:20. This is a special session to discuss congestion control and loss recovery mechanisms in QUIC and TCP. As you know, QUIC utilizes lots of TCP's mechanisms for loss recovery and congestion control; but since QUIC and TCP are different protocols, the resulting loss recovery and congestion control mechanisms are slightly different, and we would like to discuss why they are different
A: and what the implications of this difference are. By doing this, we might be able to learn from QUIC, and see what we can contribute to QUIC. In any case, I think this will be a useful discussion for both the QUIC and TCPM working groups. Since we have one hour, we will basically start from the key mechanisms in QUIC, because we think some folks may not be familiar with QUIC's mechanisms. So Ian and Jana will talk about the basic QUIC mechanisms, and after that we dive into loss detection and congestion control mechanisms, with a comparison to TCP.
C: Hello everyone, my name is Carlos Gomez, and I'm going to present the draft entitled "TCP Usage Guidance in the Internet of Things". The other authors of the draft are Jon Crowcroft and Michael Scharf. The main goal of this presentation is to provide an overview of the main contents of the draft since, as the chair already explained, we are interested in requesting working group last call for this document.
C: LWIG is the working group that deals with lightweight implementation guidance for IoT (Internet of Things) scenarios. The document has been presented at every subsequent LWIG session since then, and became a working group document after IETF 99. However, TCPM has been kept in the loop through the mailing list whenever there has been an update of the draft and a newly published version. Also, a quick heads-up on the preparation for the working group last call for this document was given at the last IETF. The latest revision is -04.
C: Okay, so let's go into the actual content. The first section, the introduction, provides the motivation and the goal for this draft. On the motivation: TCP has quite often been criticized as a protocol for IoT scenarios, and well, some of the reasons might be valid, such as the claims about the size of the TCP header, or TCP not being a good fit for multicast, and some others. The point is that some of the main claims against TCP in IoT scenarios are actually not valid.
C: One of them is TCP being too complex for IoT devices; however, it is actually possible to implement TCP in a quite lightweight way. Also, another typical claim is that TCP underperforms in wireless scenarios. That's true; however, another protocol called CoAP, which has been specifically designed for IoT scenarios, also has an end-to-end reliability mechanism built in, and it
C: actually has the same problem: it also underperforms when problems on wireless links happen, because of this difficulty of determining whether losses are due to corruption or to congestion. However, nobody has criticized that aspect in CoAP. So, in any case, the consequence is that TCP has quite frequently been neglected as a protocol for IoT. However, nowadays the situation is that TCP is actually being used in many IoT scenarios, and this could even increase in the near future. For example, MQTT is an application-layer protocol that's quite popular in IoT scenarios, and it uses TCP at the transport layer.
C: Also, there are some devices in IoT environments that use HTTP. And CoAP is an application-layer protocol that was originally designed over UDP; however, recently there has been a new version of CoAP specified over TCP. So the goal of the document is providing guidance on how TCP can be used, configured, or implemented in IoT scenarios.
C: So then, let's talk about the characteristics of constrained-node networks, which are typical of IoT scenarios, that may be relevant for TCP. First of all, constrained networks comprise constrained devices, which are devices that present significant constraints on processing, memory, and energy resources. For example, these devices may have 32-bit, 16-bit, or even 8-bit CPUs; memory may be, for example, on the order of 10 KB, or a few tens of kilobytes of RAM; and some of these devices may need to live on a single battery for possibly years.
C: On the other hand, the technologies used by these devices typically offer low bit rates (because usually, for most use cases in IoT, that's enough) and present high loss rates due to corruption, and variable link quality. That's because most of the technologies used by these devices for communication are wireless; in some cases they are wired, such as power-line communication, but even those wired ones exhibit somewhat wireless-like characteristics.
C: These devices are connected to the Internet, maybe through a router or through a gateway, sometimes through a direct link, or in other cases there is a multi-hop path between the constrained device and the connection point, the router or the gateway. A typical scenario where TCP is being used nowadays in constrained-node networks in IoT is the one shown on the slide.
C: So next, let's talk about section 4, which provides guidance on how to implement and configure TCP for constrained-node networks. This section is organized in three main subsections. The first one deals with parameters and mechanisms that relate to path properties; the second one provides guidance for devices that can only use very small window sizes; and the last one provides more general recommendations for devices that can afford larger window sizes. So the first subsection, on path properties, first provides some guidance about the MSS.
C: First, we explain that in many IoT technologies the link-layer MTU is pretty short: a few tens of bytes or a few hundreds of bytes. That means that, in order to support IPv6 on top, there needs to be some adaptation layer, which in some cases includes fragmentation, or, in some other cases, fragmentation is supported by the lower-layer technology itself. In any case, when there is this adaptation layer, you can still run IPv6: typically, the adaptation layer defines an MTU of twelve hundred and eighty bytes.
C
Then
there
may
be
also
other
technologies
used
in
IOT
which
support
larger
and
to
use.
However,
in
any
case,
we
explain
that,
for
the
sake
of
lightweight
implementation,
it
may
be
general
generally
desirable
to
limit
the
MTU
to
280
bytes
in
order
to
avoid
the
need
to
support
path.
Mtu
discovery
related
recommendation
is
to
set
the
MSS
to
a
value,
not
larger
than
120
bytes.
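The MSS recommendation above follows from simple arithmetic; as a minimal sketch (assuming IPv6 with no extension headers and a TCP header without options):

```python
# Worked example of the MSS guidance: with an IPv6 link MTU of 1280
# bytes, subtracting the fixed IPv6 header (40 bytes) and the minimal
# TCP header (20 bytes, no options) leaves an MSS of 1220 bytes.
IPV6_HEADER = 40   # bytes, fixed IPv6 header, no extension headers
TCP_HEADER = 20    # bytes, TCP header without options

def mss_for_mtu(mtu: int) -> int:
    """Largest TCP payload per segment for a given IPv6 link MTU."""
    return mtu - IPV6_HEADER - TCP_HEADER

print(mss_for_mtu(1280))  # 1220
```

Capping the MTU at 1280 bytes (the IPv6 minimum) is what lets an implementation skip Path MTU Discovery entirely.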
C
Finally,
we
also
discuss
specific
loss
notification
mechanisms.
We
explain
that
these
are
actually
interesting
because
they
might
allow
determining
whether
a
packet
loss
is
due
to
corruption
or
due
to
congestion.
However,
such
mechanisms
to
this
day
remain
mostly
experimental.
They
have
not
been
standardized
and
are
not
widely
deployed.
C
Then
the
second
subsection
provides
guidance
for
devices
that
have
a
low
amount
of
memory
and
can
only
support
very
small
window
sizes
and
for
devices
that
can
only
support
a
single
MSS
send
and
receive
window.
Then
it
is
that
tcp
may
be
implemented
in
just
a
very
simple
way,
with
very
simple
congestion
control.
There
are
some
mechanisms
that
are
not
actually
needed
in
this
case,
and
this
leads
to
some
stop-and-wait
behavior,
which
is,
however,
often
sufficient
for
most
IOT
use
cases,
for
example,
co-op
with
its
end-to-end
reliability
mechanism.
C: So, if that's at all possible, the recommendation would be to disable delayed acknowledgments at the receiver, because a single-MSS sender talking to a receiver that's using delayed acknowledgments would suffer unnecessary delay. However, there's a known workaround for this, which is called the "split hack": it is based on breaking the segment into two parts and sending those as two different IP packets, one right after the other, in order to avoid the delay that would be contributed by the delayed-acknowledgments mechanism.
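The "split hack" described above can be sketched in a few lines. This is a toy illustration of the idea only (the function name and framing are mine, not from the draft): the sender deliberately turns one segment into two back-to-back packets, because a delayed-ACK receiver acknowledges at least every second full segment immediately.

```python
def split_segment(payload: bytes) -> list[bytes]:
    """Split one segment's payload into two back-to-back packets so a
    delayed-ACK receiver generates an immediate ACK instead of waiting
    for its delayed-ACK timer. The cost is one extra packet header."""
    if len(payload) < 2:
        return [payload]          # nothing to split
    mid = len(payload) // 2
    return [payload[:mid], payload[mid:]]
```

The two halves carry the same bytes as the original segment, which is exactly the header-overhead trade-off mentioned next.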
C: Of course, the drawback of this technique is the additional packet header overhead. Then we explain that, when devices use very small window sizes, most losses may be detected by RTO expiration. Therefore, the RTO algorithm may have a large impact on performance in this case, and this opens the door to considering some kind of fine-tuning of the RTO algorithm, although of course this would need to be done carefully.
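For reference, the standard RTO computation that would be the target of such tuning is the one in RFC 6298; a minimal sketch, using the RFC's constants (any fine-tuning for constrained networks would adjust exactly these knobs, e.g. the 1-second minimum):

```python
class RtoEstimator:
    """Sketch of the RFC 6298 retransmission timeout computation."""
    ALPHA, BETA, K, G = 1 / 8, 1 / 4, 4, 0.1  # RFC 6298 constants (G = clock granularity)

    def __init__(self):
        self.srtt = None    # smoothed RTT, seconds
        self.rttvar = None  # RTT variance estimate, seconds
        self.rto = 1.0      # conservative initial RTO, seconds

    def on_rtt_sample(self, r: float) -> float:
        if self.srtt is None:                       # first measurement
            self.srtt, self.rttvar = r, r / 2
        else:                                       # subsequent measurements
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        # RFC 6298 also imposes a 1-second floor on the RTO.
        self.rto = max(1.0, self.srtt + max(self.G, self.K * self.rttvar))
        return self.rto
```

With short RTTs the 1-second floor dominates, which is one reason single-MSS stacks that rely on RTO-based loss detection can be slow to recover.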
C: Okay, thank you. So then, the third subsection provides more general recommendations for devices that have larger memory and can afford larger window sizes. In this case we explain that, in order to benefit from mechanisms such as fast retransmit and fast recovery, there needs to be a large enough window size, for example 5 MSS when the receiver is using delayed acknowledgments. Then we also discuss selective acknowledgments. We explain that, because there may be losses quite frequently in this kind of scenario, due to the wireless links typically used,
C: selective acknowledgments may help here to avoid unnecessary retransmissions, and therefore save energy and bandwidth, and also reduce latency. And we also discuss delayed acknowledgments here. We explain that for small messages with the size of one MSS, or when there are request-response interactions, disabling delayed acknowledgments at the receiver would be recommended, if at all possible. On the other hand, however, for bulk transfers, delayed acknowledgments would actually reduce the number of ACKs; so the best configuration for this mechanism depends on the actual requirements and conditions of a given scenario.
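The delayed-ACK trade-off described above is easy to quantify; as a minimal sketch (assuming the common behavior of acknowledging every second full-sized segment):

```python
import math

def acks_sent(segments: int, delayed: bool) -> int:
    """ACK count for a transfer of `segments` full-sized segments.
    A delayed-ACK receiver ACKs every second segment, roughly halving
    ACK traffic on bulk transfers; for a single-segment request-response
    exchange it saves nothing and instead adds up to the delayed-ACK
    timeout (commonly on the order of 200 ms) of latency."""
    return math.ceil(segments / 2) if delayed else segments
```

So a 10-segment bulk transfer sees its ACK count halved, while a 1-segment exchange gains nothing and only pays the timer delay, which is the "depends on the scenario" point above.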
C: Then, in section 5, we provide some recommendations on how to use TCP in constrained-node networks. So first, regarding connection initiation, we explain that it would typically be best if the connection is initiated by the constrained device. That's because otherwise it might happen that the constrained device needs to be listening all the time, with the radio interface in receive mode, consuming energy; therefore, the battery of such a device might be depleted in just a few hours. Then we also provide some guidance on the number of concurrent connections that one may have.
C: Finally, we talk about the TCP connection lifetime. First of all, assuming a typical example of some temperature sensor that's sending updates to the backend every 30 minutes: in order to minimize message overhead, a long-lived TCP connection is desirable. However, this is unfortunately not always possible, because quite frequently there are middleboxes between the two endpoints, such as firewalls, that may delete their filter state records quite quickly, for example after 5 or 10 minutes, and in that case that would actually break communication for subsequent data packets.
C: So one alternative would be to say: OK, let's maybe establish a new TCP connection for every new data message that needs to be transmitted. In that case, of course, the message overhead is much larger, and the point is that maybe in some scenarios this could be affordable, but in some others maybe that's too much. So one alternative to that is the use of TCP Fast Open, which would be positive in the sense of embedding data already in SYN packets and reducing message overhead.
C: However, there are also some issues with TFO that need to be taken into account, such as, in some circumstances, the data in the SYN being replayed to the application. So, if TFO is used, the necessary mechanisms to take care of this problem need to be in place. There are also other alternatives, such as application-layer heartbeats, or even TCP keepalives, that could be used in order to avoid this early deletion of filter state records in firewalls. Of course, the interval between consecutive heartbeats or keepalives needs to be low enough for this to be useful.
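The keepalive alternative mentioned above can be configured per-socket on common stacks; a minimal sketch using the Linux-specific socket options (TCP_KEEPIDLE / TCP_KEEPINTVL / TCP_KEEPCNT are not portable to every OS, and the chosen numbers are illustrative, picked to beat a hypothetical 5-minute firewall timeout):

```python
import socket

def enable_keepalive(sock: socket.socket, idle_s: int,
                     interval_s: int, probes: int) -> None:
    """Enable TCP keepalives so middlebox (e.g. firewall) state is
    refreshed before it expires. Linux-specific options set the probe
    schedule: first probe after idle_s seconds of silence, then every
    interval_s seconds, giving up after `probes` unanswered probes."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

# e.g. probe after 4 minutes idle, shorter than a 5-minute firewall timeout:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s, idle_s=240, interval_s=60, probes=3)
```

Note the energy trade-off the speaker raises: the interval must be shorter than the middlebox timeout, but every probe wakes the radio.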
C: This is something that may happen frequently when devices use very small window sizes, in the sense that most of those packet losses will be handled by RTO expiration, and therefore such devices might appear as potential victims for this kind of attack. However, there are some mitigation techniques that have been proposed, such as RTO randomization, or blocking the attack at a router, based on analyzing the traffic pattern of incoming packets and checking whether it correlates with a potential attack.
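The RTO-randomization mitigation mentioned above amounts to making the retransmission instant unpredictable; a minimal sketch (the function name and the 50% spread are illustrative assumptions, not values from the draft):

```python
import random

def randomized_rto(base_rto: float, spread: float = 0.5) -> float:
    """Add a uniform random component to the retransmission timeout so
    an attacker cannot predict exactly when the victim retransmits,
    which defeats attacks that pulse traffic bursts timed to coincide
    with the victim's retransmissions."""
    return base_rto + random.uniform(0.0, spread * base_rto)
```

An attacker who knows only the base RTO can no longer synchronize its bursts with the retransmissions, at the cost of slightly later recovery on average.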
C: Finally, we have a section in the document, an annex, that tries to summarize the main features of some of the main TCP implementations for constrained devices in IoT scenarios. The idea is that we try to cover a range of implementations, from those that are intended for very simple devices, with even 8-bit CPUs and very small windows and very small RAM, to those which are somewhat more powerful. And this is a table that overviews the main features, the main parameters, and whether some mechanisms are actually implemented or not.
C: Well, so after the cutoff Internet-Draft submission date, there were some comments provided by Yoshi; thank you very much, by the way, for the review. We already provided responses to those comments, and our plan is to quickly publish the next revision, -05, incorporating this feedback.
G: It's obviously crypto, it's seriously, obviously crypto, and in fact rather busted. If you want to use it here, you're going to have to write a lot of text about why your constrained devices can cope with MD5 and get benefits from it in this special case, when it is bad engineering practice almost everywhere else. Okay.
E
Ghana
Fair
has
died
very
quickly,
read
it
before
and
I've.
Had
a
quick
look,
three
now
I'm
happy
to
provide
comments
and
feedback
which
I
think
will
mostly
be
of
the
style
I
see
this,
but
maybe
you
should
think
of
that,
so
that
indicates
it
may
be
ready
for
a
working
group.
Last
call
and
I
didn't
see
the
summary
table
anywhere.
H: ...the ECN capability, and the proposal comes in two parts. One: if you have negotiated Accurate ECN, you can put the ECN capability on every packet, everything. If you haven't negotiated AccECN feedback, which means, if you're using ECN, you must use RFC 3168, then you can't put it on the SYN and the pure ACK, but you can put it on everything else. All right, and I'm not going to go through the right-hand column; that's all in the draft, and, as I said, this is summarized in the draft.
H: That's just a summary of it, but that's the congestion response that you're likely to do. This draft isn't about congestion control, it's just about the wire protocol, but essentially we didn't want to say nothing about the congestion response once you've started putting congestion signals on a packet. So essentially what we try to say is: do it like you would do this; a minimal statement of what you would most likely do if you get congestion on one of those sorts of packets. That's more difficult on some of them.
H: So that's the summary. One or two other bits and pieces: when I say "Accurate ECN negotiated", obviously on the SYN you haven't negotiated anything yet, because you're just sending the first packet, so it means "requested" in that case in this table. And the two cases with the little mark by them only apply where you are actually getting the signals, not in the RFC 3168 cases, where you're not.
H: So we thought we'd finished this draft, but a relatively major editorial issue came up, and also three technical issues, and that's what this talk is going to be about, which is why I asked for 20 minutes. The editorial issue was that a couple of people said it's really difficult, if you're not going to implement AccECN, to follow this draft; can you somehow separate out the two a bit better? And I have to admit this was true.
H: So we've fixed that; I'll explain how. The technical issues were: it was agreed in the working group that we would allow CE on a pure ACK if you're using AccECN, and this draft, as I said, isn't about congestion control, but we had to say something; we had to give some hints, maybe some guidelines, on how you might design the congestion control response for any congestion on the pure ACK.
H: Then there was the issue with the IANA registration, and I'll explain that. And finally, we've had to widen the scope a bit (a bit of mission creep, maybe) to cover some receiver activities to do with packet validation and acceptance. So all that will be explained next. All right, so first, let's get the editorial issue over with. It wasn't just editorial:
H: it was more a process problem, in that, if you choose not to implement AccECN, or because AccECN is an experiment in itself, as is ECN++: given this draft has sections that depend on another experiment, there was a sort of queasy feeling that that experiment might change, it might evolve, the final standards-track thing might be different, it might even end in failure, and all the rest of it.
H
So
it
needs
to
be
clear
what
you
do
without
that
experiment,
and
so
anyway,
suggested
solutions
from
gory
and
micro,
Sheriff
well
and,
and
others
actually
would
either
split
the
draft
into
two,
nearly
identical
drafts,
just
through
slight
differences.
I
really
didn't
like
that,
because
I
hate,
having
sort
of
identical
tech,
you
know,
90%
of
the
tech
is
identical
and
temper
said
not
identical.
That's
sort
of
offends
my
single
source
type
sentiments.
H: The other one was to have an appendix explaining which aspects of the draft depend on Accurate ECN; actually, that's not very well put. I mean an appendix explaining, if you are only implementing the non-AccECN part, which parts of the draft you do. Anyway, we took neither of those. The solution was to actually write it properly, essentially: to take the sections where there was a dependence on AccECN, split them in two, and have: if you're negotiating AccECN, you do this;
H: if not, you do this. And in those sections, in every subsection, we remind you: if you are not doing AccECN, ignore this section; if you're not doing AccECN, ignore this section; just to make it easy, right. And that's that. I hope that would solve it; I haven't actually solicited feedback from Michael and Gorry as to whether that would solve the problem. I did have a meeting with Gorry last time, but not with Michael, about it.
I: So I understand the problem of having a dependency on another experimental draft. But currently the draft says it is recommended to also implement AccECN, and part of the editorial problem was also that, even if you implement AccECN, you might end up with a non-AccECN receiver; so you have the different cases, and those needed to be separated anyway, right.
I: But in the same vein, when you actually implement this draft, I really think you must also implement AccECN, and we have to do these two experiments together, because it would actually be much safer to deploy this together with AccECN, because AccECN simply provides the feedback you need for this.
H: Actually, I did detail that on that previous slide: in the SYN section it says whether you are requesting Accurate ECN or not requesting it, because obviously you don't know yet whether the negotiation is going to happen. On the pure ACK side, it's: if you have negotiated AccECN, then you do the corresponding behavior.
H: So, as I said, congestion response specifics are out of scope, but we have to say something. On most of the control packets we can just say: use the usual congestion window or initial window response, which is what we do. We make up some stuff in the SYN/ACK case, but the pure ACK is a difficult one, partly because there's a lot of confusion in this area, which I don't want to add to, and partly because it's difficult.
H: So ultimately the specifics need to be defined for each congestion control, because this isn't a congestion control document. But I'll now give you some ideas and guidance, and see what you think of this guidance. Even if this guidance is wrong, it doesn't really matter, because it's not normative; but it shouldn't be completely wrong. So:
H: right, a CE-marked pure ACK is part of an aggregate causing congestion. So the first thing to say is that, if you've got this model in your head that there's a server blasting data down at some client, and the client is sending a pure ACK stream back with no data in it, and it's getting a CE mark on some of those pure ACKs: that isn't necessarily the whole story. Case number one: there may be another flow, either from your own machine or someone else's machine, in the upstream.
H: So the idea of these pictures is: that's the grey case; yours is the red case, the ACK stream with the little dots, and the grey is something else, all right, something big. Next, in case two, it may be that your own flow is also sending data interspersed with the ACKs. When you get a congestion notification on the ACK, you don't necessarily want to add all the logic to say "am I sending other packets or not at this time?" and all the rest of it.
H
You
want
to
be
able
to
know
something
black
brainless
to
do,
and
the
third
case
is
the
one
that
everyone's
assumed
that
till
now,
which
is,
if
you're
getting
seee
ya'
neck.
It
means
you're
only
sending
axe
and
that's
not
necessarily
a
case,
but
it
could
be
so
in
the
draft
informative.
Only.
We
suggest
two
potential
responses,
one
very
optional,
if
you
can
have
very
optional,
a
manner
is
to
use
the
ACT
congestion
control
RFC.
The
reason
that's
optional
is
quite
complicated.
H: for what it gives you, but it is a way of slowing down the ACK stream, by increasing your delayed-ACK factor; essentially, you can't slow down the other end. And then it also suggests that you reduce your cwnd, even though it's a pure ACK with no data on it, proportional to the sum of the CE-marked ACK bytes and the CE-marked data bytes, divided by all header bytes plus all data bytes. In other words: the sum of all your CE-marked bytes, divided by the sum of all your bytes, taking a nominal header size.
I: One case is: you've been sending data, so your congestion window is actually big, but you stopped sending data and you only send pure ACKs now, and you get the congestion feedback for the pure ACKs. Should you now reduce your big congestion window based on that feedback? That's case one. And the second:
I: you're the sender of the pure ACKs; you've been sending data, then you stopped sending data, but you still send ACKs, because there might be data flowing in the other direction. So at that point, where you're only sending ACKs, the congestion window can still be big, because you've previously sent data, and now you get a congestion marking on the ACK. Should you reduce the congestion window?
I: The answer in the end is yes, but by how much? And the second case is also: if you're sending data and ACKs at the same time, which is addressed by this but might be similar, and you get a marking on the ACK, does it have an impact on your congestion window?
H: So the idea of this formula is essentially that you scale down your congestion response by the size of the packet. Obviously, because an ACK has no data in it, you could say it's got size zero, but it hasn't: it's actually got a header. So what this does is effectively say: let's make the proportion of scaling be the size of the ACK packet divided by the size of a full-sized packet, including the headers in both cases. And with AccECN you can do this
H: quite simply. You've got to count all the packets, but you have to guess how big your packet headers are, unless you can actually know it through the operating system somehow. It's not a particular problem guessing it; just guess 40 bytes or something, and it doesn't matter too much if you're wrong, because this is a fairly arbitrary scaling anyway. It just means that, if you're getting a mixture of some pure ACKs with CE marks and some full-sized segments with CE marks, you take the total of all of those over a round-trip time.
H
And
therefore,
if
you're
sending
a
lot
of
pure
acts,
they
won't
make
much
difference
to
your
congestion
window,
but
and
if
you're
sending
a
lot
of
full-size
segments
and
you
get
one
or
two
pure
acts,
also
marked
in
there
they'll
make
you
know,
the
marks
on
the
bigger
packets
will
make
obviously
a
lot
more
difference,
but
it
will
eventually,
oh
and
if
you're,
sending
a
pure
act
stream.
That
alone.
H: you'd be being very unfair on yourself if you responded fully, because if you then start sending data again, particularly in the case where it's not you causing the congestion, you've just massively reduced your congestion window even though you weren't sending anything apart from your ACKs, because of somebody else's congestion. So the idea is to scale it by the amount of data you're sending, and if everyone's doing that, it should end up that you don't hurt yourself. Basically, you'd otherwise be shooting yourself in the foot.
H: That's why the previous slide said this all depends on which congestion control scheme you're in, and this is suggested as a guideline; it's the particular congestion control algorithm that will make those trade-offs, but we can't make those trade-offs for them all. I would say this, because it's related to AccECN, is more likely to be for data-center-TCP-type things than your traditional big response, yeah.
H: Your cwnd would be reduced faster than if you weren't getting CE on your ACKs. So this is only just sort of enough to give you some ideas on how you would do this, but it's all going to be done properly in a congestion control draft. And I can see you don't look so happy about this.
I: And I understand that this is not any kind of normative requirement or whatever, and this is part of the experiment, and providing some hints just makes some sense; but this is very, very close to a data center TCP style reaction, right? So it's quite a specific case after all, so I'm not sure if that's what I would recommend in general. I don't know; okay.
I: The answer of the experiment could also be that you just want to adapt the CWV stuff slightly and take ACKs into account; or, in the end, it could be that you just leave it, because the cases where you hurt yourself are very rare and don't really hurt, or whatever. I mean, that's part of the experiment, I guess.
F
So am I understanding correctly that in this case three, if you do nothing, it will wind down? You still reduce your congestion window, but it has no effect because you're not sending any data. But that's the case where I see that you should do something, because you have a constant ACK stream.
H
We have to do something that will deal with this reasonably. And ultimately, if you are in that scenario three, and your own ACK stream, and maybe other ACK streams, are the thing that's congesting this link: if your lack of action causes that congestion to get worse, the ECN will move to drop in the network, and it will start thinning out the ACK stream, and that will sort it out. But then it should.
F
I'm
not
bending
in
about
this
a
lot,
but
the
case
three
would
be
the
only
case
where
I
would
react
and
I
wouldn't
bother
reacting
in
one
one
or
two,
because
they
are
sorted
out.
Otherwise,
you
know
in
other
ways
later
on
Andy
and
now,
if
you
try
to
kind
of
react,
is
because
very
very
complicated
and
you
would
have
side
effects
in
different
scenarios
and
maybe.
H
F
H
H
H
H
F
H
H
H
It deals reasonably with all three cases. In one and two, you get a congestion window reaction that's scaled down a lot by the fact that you're getting congestion markings on very small packets. It deals with one and three in that you get a congestion window reduction, but it has no effect on the pure ACKs, which is maybe what you want, and the ACK ECN would deal with the ACK stream, but I think just ACK thinning by loss deals with that sufficiently. But it's all experimental; I don't think it's been tested enough.
H
...can use that word. But so I'm trying to do something as simple as possible, or at least to suggest something as simple as possible, for the AccECN case, and I would hope that gives the hints for people to do something as simple as possible in the other cases as well. And as simple as possible might be nothing, but I can't say that in a draft, right? Next. Okay, so moving on: you have to read this title like your own football score.
H
Reading it like a football score: network mangling, nil; server mangling, 84! Actually it's more like a rugby score, isn't it? So I happened to come across this paper and then discovered it was written by Mirja, in this room, and others, including Brian Trammell, and it got me very embarrassed, because we've been doing measurement tests of ECN on servers for some years. And now 82 percent of servers support ECN, so really cool, they've got up to 82 percent, but on 84 percent of them it was disabled.
H
Right, now, how did we never discover this? Partly because I thought the student understood that this is what they were meant to be measuring, and in their report I thought they had measured it, but they hadn't. And Mirja confirms also that in all their previous experiments they haven't measured the server's response; they've only measured the network's response, in the same way that I only checked the network traversal. We have not looked at what the server...
L
H
It was really embarrassing, all right. Anyway, I traced it; at least, obviously, I can't know, well, we could know what the operating systems of all these were with a lot more work, but assuming this is Linux, I traced it to a patch applied in May 2012, and there was a comment on top of the patch which says it all, really. It cites RFC 3168; there aren't many comments in Linux, actually, but it says SYN packets must not have the ECT bits set.
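The heuristic being described can be paraphrased roughly as follows. This is an illustrative Python sketch of the behaviour as the speaker describes it, not the actual kernel code; the names are made up.

```python
# ECN field values in the IP header (the two low-order TOS bits)
INET_ECN_NOT_ECT = 0b00
INET_ECN_ECT_1   = 0b01
INET_ECN_ECT_0   = 0b10
INET_ECN_CE      = 0b11

def accept_ecn_negotiation(syn_ecn_field):
    """Sketch of the behaviour traced to the May 2012 patch: if the
    arriving SYN carries any non-zero ECN codepoint, silently refuse
    ECN for the whole connection (no logging, no report)."""
    return syn_ecn_field == INET_ECN_NOT_ECT
```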
H
So how do we get this fixed? Well, let's just have a bit of irony first. In our measurements of network mangling, which unfortunately were the only ones we did, we actually got to the point now where we found zero occurrences of this problem, where a not-ECT SYN becomes ECT or CE. We can't see it anywhere, but Linux is testing in case it's there, and turning off all ECN in four percent of cases.
H
If it is there. So it's a rather drastic action based on one-ended inference of a codepoint transition, and it's also silent: there's no logging of the problem to get it fixed or anything like that. So I would recommend, or we recommend in the draft now, which means we're saying the IETF will recommend: remove that test, the test against ECT on the SYN, while you're deploying AccECN on all servers. And that's the... why are we saying this in a draft?
H
You could also recommend that, while deploying ECN++ on servers, you get this test removed, or you could just recommend that everyone removes it, particularly given the network doesn't seem to be doing this any more; that could be safe. But I assume the reason it was originally put in Linux was because they were worried that if the network was turning a not-ECT SYN into ECT or CE, then your ECN mechanism wouldn't work once you'd negotiated it, because it's probably some diffserv bug in some bit of network equipment, yeah.
H
I
I could add to that: there was the case where the ECN bits were overwritten to be set to CE, which actually had the consequence that every packet was treated as congestion-marked and your throughput goes down to zero, right, and so that's why they had to fall back. I was also surprised to see that they had the same fallback for ECT(0), which seems to be wrong. Yeah, yeah.
H
Yeah, and partly through Apple's deployment and fixing of that problem, this does seem to have gone away very quickly, down to zero, well, as far as we've measured. You know, and it's actually also difficult to measure: when we measured it on mobile networks, we have a lot of bleaching of the ECN codepoint, and therefore, if it wasn't bleached, you might find things happening behind that bleaching. So we don't actually know; there may be two effects on the same path, one of which is hidden by the first one.
H
B
H
Okay, but this is just: how do we get the code fixed? This slide, in fact, number three... [inaudible] ...which is two slides on. So the next slide is a workaround by client caching, and all this was actually already in this draft, because you have to do this caching anyway. This is essentially the test for whether the other end supports AccECN, and if it doesn't, then don't send your ECT on the SYN to it in future.
H
Now there are two possible cases, both identical diagrams except everything's flipped in one or the other. In other words, the default in one is: set ECT on the SYN and turn it off if it's cached as failing; the other one is: don't set ECT on the SYN and turn it on if it's cached as okay. That's essentially what these two diagrams show, so we're still recommending that as the SHOULD in the draft.
H
This
is
all
in
the
draft.
It
was
just
an
attempt
to
explain
it
reasonably
quickly.
I
I
think
the
only
thing
to
say
about
this
is
that
probably
the
most
significant
aspect
is
to
decide
between.
These
is
what
happens
to
if
you
don't
want
to
have
too
big
a
cache,
particularly
when
you've
got
this
large
percentage.
When
you
start
up
servers
that
are
problematic,
they
like
to
have
quite
a
large
cache
to
catch
them
all.
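The caching workaround might look roughly like this, in the "default on, cache failures" flavour. The class and method names are hypothetical; the draft specifies the behaviour, not this code, and the flipped variant simply inverts both decisions.

```python
class EctSynCache:
    """Per-server cache deciding whether to set ECT on the SYN.

    Default-on variant: set ECT on the SYN unless this server is
    cached as problematic; on failure, cache it and retry without.
    Illustrative sketch only."""

    def __init__(self):
        self.broken = set()  # servers where ECT on the SYN failed

    def ect_on_syn(self, server):
        # Set ECT on the SYN unless we have cached a failure.
        return server not in self.broken

    def record_failure(self, server):
        # e.g. a SYN with ECT got no answer but a bare SYN did.
        self.broken.add(server)
```

The cache-size concern the speaker raises falls out directly: with a large fraction of problematic servers at start-up, `broken` has to hold many entries to be effective.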
H
I
H
Okay, next. All right, so "Linux respects 3168": I wouldn't quite say that's the problem. The problem is the silence on what to do if you get one. So "the host must not set ECT on the SYN or SYN-ACK"; RFC 8311, which David did as an update to this, has added "unless otherwise specified by an Experimental RFC", which is what this draft is, and that is what it was intended for.
H
So it's a bit opaque for the Linux community to determine that that first statement is no longer true. And I did actually argue with David at the time that we ought to make this clearer, but they didn't want to make everything non-compliant in one swoop. However, we could have said... it would have been nice to have something in 8311 that had a clear statement, at least a SHOULD, you know, so it was clear. So it's a bit difficult to argue with the Linux community.
H
You haven't really got a clear statement, because on that third bullet: what does the server do if it received non-zero ECN on a SYN? 3168: there's nothing. 8311 does say what a middlebox shouldn't do; it doesn't say what a server should do.
H
So what I've put in this draft is: in order for this experiment to be useful, the following requirements follow from RFC 8311. In other words, we now have an Experimental draft, so this Experimental draft says that any TCP implementation should accept receipt of any valid TCP control packet or retransmission, irrespective of its ECN field.
G
What's bugging me is the three words that basically say these requirements "follow from", i.e. are implied by, 8311. I don't think those three words are apt. Right, "follow", maybe, from RFC 8311... I think the requirements are inherent in the structure of the experiment, right.
H
E
I
D
H
G
This is the complaint between Gorry and me, I think. What we're telling you to do is strike the words "follow from 8311" and say something like "are necessary", because 8311 doesn't imply any of this; it opens up the opportunity to do so. And your assertion that you need these requirements I think is correct, but the assertions should be grounded in the structure of the experiment, not the window that you feel 8311 opens for you. [inaudible]
G
H
B
H
B
Patch: if there's a patch comment that says "3168 says this; this is why the code is here"... So, frankly, you can try, and if you fail... yeah, I think we should tell them. We need to actually update 3168 or 8311 to specify that this is the way it should be, but you can try first with just this text and see if they can accept the patch; and if they won't, we might have to do this again.
E
H
...words, nothing else out of line, right. Next. Okay, so then, I mean, having written that into the draft, we realized that the draft now said in a number of places "this is a sender-only change"; it emphasizes a sender-only change, and then we started to talk about receiver stuff. And so what we realized we actually meant was: this is a sender-only deployment. In other words, you can deploy this sender-only, and there are these guidelines for what to do when you receive these packets. But it doesn't stop...
H
You
said
deploying
it
as
a
sender.
You
don't
have
to
negotiate
it
with
the
receiver,
all
right,
and
so
they
altered
all
that
to
say,
send
their
only
deployment
rather
than
send
the
only
send
or
any
change,
and
we've
got
a
new
section
where
we're
collective
together
all
the
things
you
do
on
the
receiving
end
of
these
packets,
rather
than
having
it
sort
of
hidden
in
the
section
that
says
this
is
all
about
senders.
H
Because
there's
already
stuff
in
there
on
you
know
packet,
validation,
checks,
I,
think
it's
50
to
98
or
something
the
one
that
there's
only
checking
for
dos
attacks
on
packets
and
stuff,
like
that,
there's
already
all
those
sort
of
recommendations
in
as
well
that
all
goes
with
it
next,
okay,
so
we
hopefully
really
have
finished
I
mean
there's,
obviously
some
work
to
do
outside
of
the
ITF
now
but
closed
off.
Those
four
open
issues:
we've
structured,
the
writing,
there's
a
huge
amount
of
edits
in
this.
This
latest
draft
got
some
suggestions
on
response.
H
...to CE, got the stuff about the tests in there, and I've tried to keep the normative part of the draft pretty short and then moved even more out into the informative part, all the rationale. So to know what to do, you've just got to read a fairly short part now. And so I think we're at the point now where we can go to working group last call, and it's incredible how difficult it is to just set a bit, or even one bit.
A
H
A
H
There's a wording change on this particular thing, on 8311. There may be some discontent about the ACK congestion control there, the two bits, but is there anyone actually wanting us to change the ACK congestion control stuff? I guess people maybe need... have people read it?
A
H
I
H
M
H
M
H
The sender or receiver never knows whether there's actually been a loss, and obviously in classic TCP you have the three-DupACK rule. With RACK, the idea is either a fraction of the round-trip time, or you actually measure: when you've got an idea of the reordering window that you're seeing, you measure it and you determine what the reordering window is. And I'll measure that as a fraction of the round-trip time and call that epsilon, as Greek people, or all mathematicians, would call it.
H
You know, if it was 75%, 90%, you're getting close to two round-trip times, so it's starting to quite seriously impact your delay. As long as epsilon is a small fraction, you're reasonably okay, but you don't want to go too small; otherwise you start increasing spurious retransmissions, because you start getting under the floor of reordering in the network.
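The epsilon idea described here could be sketched as follows. This is illustrative only; RACK specifies the real algorithm and its update rules, and the function names and default value are assumptions.

```python
def reordering_window(srtt, epsilon=0.125):
    """Reordering window as a fraction (epsilon) of the smoothed RTT.
    Too large an epsilon (say 0.75 to 0.9) approaches a whole extra
    round trip of delay before repair; too small, and retransmissions
    fire spuriously under the network's normal level of reordering."""
    return epsilon * srtt

def is_lost(send_time, newest_acked_send_time, srtt):
    """A packet is deemed lost once a packet sent later than it has
    been acknowledged and the reordering window has elapsed, i.e.
    loss is inferred in units of time, not in counts of packets."""
    return newest_acked_send_time - send_time > reordering_window(srtt)
```

So with an SRTT of 80 ms and the default epsilon, a packet is only declared lost once something sent more than 10 ms after it has been delivered.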
H
If you couple it with PI... and you may have seen some demonstrations in the IETF of applications with that sort of very predictable low delay, high bandwidth, a few IETFs ago. So I won't go into the architecture of L4S; just to say that it's a protocol change on the ECN field: it uses the ECT(1) codepoint to say "this is an L4S packet".
H
So
about
three
months
ago,
we
realized
there
was
an
opportunity
here.
So
we
added
a
fifth
requirement
in
in
the
l4
s
spec.
There
are
requirements
on
senders
on
what
they
must
do
in
order
to
have
the
right
to
set
this
code
point
because
essentially,
the
low
delay
in
l4
s
comes
from
the
sender's
behavior
and
the
decent
tcp,
like
congestion
control,
is
one
of
the
main
things
that
gives
you
this
very
low
delay
and
we
realized.
If
we
added
one
more
requirement,
this
fifth
requirement
that
you
must
not
detect
loss
in
units
of
packets.
H
We
would
get
a
very
important
gain
for
the
future,
and
so
this
talk
is
about
whether
that
is
a
sensible
idea
and
whether
there
are
implications.
So
what
it's
really
doing
is
its
release,
a
constraint
on
link
designers,
so
they
can
make
faster
linked,
but
it's
giving
you
a
bit
more
pain
in
the
in
the
in
the
sender,
in
that
you
have
to
do
this
rack
thing,
which
has
already
been
done
by
Microsoft
Apple
Google.
H
Or
the
permission
to
do
this
has
to
come
from
those
who
would
have
the
pain
of
implementing
it,
which
is
TCP,
erm
cat
or
all
the
congestion
controls,
and
then,
if
we
do
that,
I'm
now
going
to
explain
why
that
gives
you
benefits
in
the
link
end
of
the
architecture-
and
this
is
all
explained
in
that
appendix
of
the
draft.
If
you
want
to
go
and
read
it
next,.
H
So
the
idea
of
saying
must
not
in
the
l4s
specs,
you
must
not
count
in
in
your
three
new
pet
type
role
and
package.
You
must
do
it
in
time
is
so
that
anything
that
any
network
link
that's
easy
city,
one
at
the
IP
layer
can
know
that.
There's
nothing,
that's
counting
in
packets,
and
that
means
it
can
disable
all
the
effort
it's
putting
in
to
keeping
making
sure
that
the
packets
stay
in
order
in
the
network,
for
that
particular.
H
If
it's
got
a
queue
where
all
that
traffic
is,
and
in
this
case
L
for
s,
has
a
queue
where
all
that
ECT
one
chap
is
classified
into,
and
that
means
it
can
turn
off
resequencing
and
that
can
give
you
quite
significant
gains
in
a
radio
network.
What
happens?
Is
you
have
your
link
layer
retransmissions
going
on
and
that
the
receive
end
of
the
radio
link
in
LTE
and
Wi-Fi?
H
It
will
buffer
those
packets
they
have
got
through
until
it's
got
all
the
little
violet
retransmissions
through,
and
then
it
will
send
and
once
it's
not
got
a
hole
in
front
of
them.
So
you
get
a
head
of
line
blocking
because
of
that
need
to
keep
things
in
order
because
of
TCPS
need
to
ensure
that
not
more
than
three
packets
are
out
of
order.
So
you
can
turn
that
off.
You
can
do
remove
all
those
delays
from
your
radio
link
and
it's
not
only
radio
links
most
most
attempt
to
make
links
go
faster.
H
These
days
involve
spreading
your
your
capacity
over
a
number
of
bonded
channels
and
so
you're
running
over
multiple
frequencies
and
then
at
the
end
they
all
have
to
come
back
together,
and
so
they
will
have
to
wait
to
get
back
in
order.
It
just
makes
the
whole
thing
more
complicated.
So
if
you
just
release
things
as
they
come
at
least
think
where
things
is
packet,
and
so
this
must
not
means
that
you
can.
No,
you
can
turn
it
off.
Now.
We've
got
rack
in
TCP.
H
You
can't
turn
off
your
resequencing
because
of
course,
they're
still
away
old,
TCP
and
that
queue
as
well,
and
so
that's
going
to
take
you
know
decades
until
that's
removed
from
the
network.
So
the
idea
here
is:
we've
got
a
new
queue.
If
we
can
say
you
must
not
do
the
old
things
in
this
new
queue,
we
can
immediately.
B
H
So here you've got a slow packet stream, and here you've got a faster one: 96 packets per round-trip time versus 12. And so your packets obviously take a shorter time as you go faster, so three DupACKs becomes a smaller and smaller amount of time as flows get faster. And so that means that all the engineering you have to do to design a link has to be done in a smaller and smaller amount of time.
H
Right
we're
down
to
seven
50
microseconds
it
then
that's
only
96
packets.
By
round-trip
time
you
go
10
100
times
faster
than
that
you're
down
in
the
single
microseconds,
and
so
on
that
you
have
to
keep
everything
in
order,
and
so,
if
you're
got
a
multi-channel
link,
you've
got
to
hold
everything
and
get
everything
back
in
order
to
that
level
of
precision.
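The shrinking-time argument is easy to check with quick arithmetic. The function name and packet size are illustrative assumptions, chosen to match the flavour of the slide's numbers.

```python
def time_for_dupacks(rate_bps, pkt_bytes=1500, n_dupacks=3):
    """Wall-clock time spanned by n_dupacks full-size packets at a
    given line rate: roughly the window a link has for restoring
    ordering if the receiver counts duplicate ACKs in packets."""
    return n_dupacks * pkt_bytes * 8 / rate_bps
```

At 1 Gb/s, three 1500-byte packets span 36 microseconds; at 100 Gb/s it is 0.36 microseconds, which is the level of precision a multi-channel link would have to resequence to.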
H
E
A
Quick question: so you mention RACK here, but it's actually timer based, because... [inaudible]
H
Yeah, I said this already: with a radio link, you can just send straight from the receiving end of the radio link, without the resequencing buffer, and then the end system, you know, the TCP receiver's stack, will put things back in order, rather than you putting them back in order in the network first. And that's because otherwise the network has to stop TCP thinking there's been a loss, and also...
H
Sorry
I'd
have
thought
in
my
head,
but
it's
gone
anyway
right.
So
next
thing
yeah.
So
this
this
is
the
discussion
point.
But
if
we
say
in
the
Alpha
aspects
and
potentially
in
quick
as
well,
although
maybe
the
the
horses
already
bolted
on
that
one,
because
then
you
could
say
all
UDP,
you
don't
have
to
put
into
order,
which
is
generally
assumed
at
the
moment
that.
B
D
N
H
G
J
K
H
K
K
H
K
L
So it's a question, whatever, but I like your talk; I like where you're going. Where you are going is that there is a lot of crap in the network that is there because there was a lot of crap in the transport. And so, if we gave the network guys confidence that there is no more crap in the transport, they might remove the crap in the network, and we might have a better network, and that's nice, that's very nice. Now, the point is that it will take some convincing, and it takes some time.
L
Okay,
suppose
that
I
am
at
the
cable
labs,
for
example,
deciding
whether
I
want
move
the
the
resequencing
in
my
packets
and
and
this
Bob
he
likes
me
hey.
If
that
bit
is
said,
I
mean
you
can
or
if
it
is
UDP
you
can
and
you
would
have
an
old
crusty
guy.
That
said,
yeah
on
this
transport
guys
are
both
of
us.
I
mean
if
you
do,
that,
it's
gonna
like
so.
How
long
will
it
take?
How
long
will
that
process
take
so
the
old
crusty
guys
acknowledge
that?
Maybe
they
don't
need
to
do
that.
L
H
L
D
H
D
I just wanted to make a comment that you're focusing very much on RACK, but even with packet counting, when you have reordering, the threshold, which is dynamic, dynamically adjusts to what you're seeing, what you're observing in the network. That's basically almost as good, right?
H
[inaudible] So if we... and we had an email discussion on this, on the TCPM list; I think it was entitled "vicious circle" or "virtuous circle", or something like that. And it's: how can we make it so it's clear how much time we're giving them, and not just saying "just take the brakes off"? Okay, yeah.
E
So
Gophers,
just
on
the
RTT
of
rate
for
whatever,
and
if
we
get
past
changes
which
are
very
significant,
then
we
will
get
significant
retransmissions
that
we
don't
watch
out.
So
there
are
trade-offs
in
this
space
and
I
want
to
be
careful
that
we
somehow
just
think
through
this,
especially
we've
got
radial
resource
management,
etc.
Also,
choosing
timing
intervals
that
can
become
very
variable.
H
The
idea
was,
we
still
use
the
right
mechanisms
for
finding
out
what
the
reordering
window
is
and
adapt
into
it
as
it
changes
and
and
whatever.
But
this
is
just
for
the
the
real
problem
here
is:
how
do
you
get
started
because
at
the
moment
rack
starts
with
three
new
packs.
It
does
and
because
that's
convenient
and
I'm
saying.
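The adaptation question raised here might be pictured like this. This is a sketch of the shape only; RACK defines its own update rule, and both the doubling step and the lower bound are assumptions for illustration.

```python
def adapt_reordering_window(window, srtt, reordering_seen):
    """Grow the reordering window when reordering was observed (a
    packet declared lost turned up after all); otherwise keep it at
    no less than a small starting fraction of the RTT.

    Illustrative only: the doubling and the srtt/8 floor are made-up
    parameters, not RACK's specified behaviour."""
    if reordering_seen:
        return min(srtt, window * 2)   # back off, capped at one RTT
    return max(window, srtt / 8)       # never below a small fraction
```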
D
H
B
Michael Lorenzen. So I would just like to lift this discussion: if we're relaxing the need to order packets in a five-tuple flow, this should be done at a strategic level, as a long-term objective, to get it into the protocols, and this should be a fundamental change in how the Internet and its applications work. I'd love it if we can do this, because it would help so much for so many mediums, radio or RF, you know, DOCSIS, whatever; it would help hugely, but it needs to be...
B
H
I think RACK spread very fast in the TCP stacks; it's been implemented very fast. Now, the deployment, I don't know about that. And the thing about L4S is, I've often called it an incrementally deployable clean slate. It's something that gives you that opportunity to get rid of some of your mistakes from the past, from day one, and clear out the clutter, you know, right.
B
So this is not just, like, Bob standing here for his pet idea. If we're doing this, it should be done [inaudible], and anything we do going forward is designed like that, and it needs to be agreed by a wide audience that this is the way we're going. You can't have just a sliver of the traffic be like this; it needs to be a large portion.
H
This
is
what
David
and
gory
been
saying
to
me
that
this
needs
to
be
agreed
by
wide
audience,
and
my
take
on
that
is
them.
The
people
that
most
need
to
grit
are
the
ones
that
will
get
more
of
the
pain,
which
is
the
transport
area,
the
ones
that
are
getting
the
benefit
from
it.
Yes,
we
can
assume
that
they
will
agree.
M
...timer based, so... because then we can somehow sync up the layer-2 retransmission windows, and on changing the whole thing from a given window of layer-2 retransmissions: we have to bear in mind that layer-2 packets are smaller than MSS packets. So basically we have some retransmission optimization, and we don't resend whole TCP packets over the radio link, so there are also some savings in that respect.