From YouTube: IETF102-TCPM-20180717-0930
Description: TCPM meeting session at IETF 102, 2018-07-17 09:30
https://datatracker.ietf.org/meeting/102/proceedings/
C: Shall we start? Okay. Okay! So this is the TCPM meeting — TCP Maintenance and Minor Extensions — and my name is [unintelligible]. Please make sure you are in the right room.
C: This is the usual Note Well, and I think you are already aware of this one, but just in case: by participating in the IETF you agree to follow IETF process and policies. If you have any concerns about it, please check the Note Well very carefully; you can find the same content on the IETF website. [inaudible]
C: Thank you so much. This is just the usual reminder: when you join the meeting and speak at the microphone, please state your name, so that the note takers can track your name. And when you submit an Internet-Draft to this working group, please make sure to include "tcpm" in your draft name, so that the chairs can track the status of your draft. Okay.
C: So this is today's agenda. We are starting with working group status from the chairs, and then we have two working group item presentations. First, Yuchung will talk about the RACK draft update, and after that Bob will talk about the Accurate ECN draft update. And after that we have two individual presentations.
C: Yoshi will talk about his new Internet-Draft, and then we have another individual presentation, on making TCP faster and cheaper for applications; this talk will be given by [names unintelligible]. After that we have one more presentation: TCP usage in constrained-node networks. That draft is actually a working group item, but not of this working group — however, the draft is related to TCP.
C: So the TCPM chairs and the LWIG working group chairs agreed to run the working group last call on both the TCPM and LWIG working groups, and this presentation gives some kind of heads-up before the draft goes to working group last call.
So this draft is not in working group last call right now, but we want to give you a heads-up beforehand. If you are interested in this draft, or if you want to say something about it, please read the draft and provide feedback on it.
C: That is why we have this presentation today. Any questions on the agenda? Any comments or questions? Okay, moving on. Let's start with the status of documents. Since the last IETF we have finished one draft: we have submitted the Alternative Backoff with ECN draft to the IESG. Thanks so much for your cooperation.
C: We now have six active working group items. The Accurate ECN draft has been updated recently, and it is one of today's agenda items. The next one is the TCPM RACK draft; it has also been updated and is also on today's agenda. And the 793bis draft has been updated.
C: Unfortunately the authors cannot come to this meeting, but we have received slides from them, so we will talk about it on their behalf. Then the TCPM converters draft has been updated recently, but according to the authors it is a minor update. And for the [unintelligible] draft, there is no update so far.
C: According to the authors, they are doing some ongoing experiments on that draft. And the next one is the Generalized ECN draft; unfortunately it has expired for now, so we expect the authors to update the draft very soon. That is the status of the working group items. Any questions or comments about the active documents?
C: Okay, moving on. So this is the slide for the 793bis status; I will speak about it on behalf of the authors. Two weeks ago revision ten was submitted; it is an update related to diffserv and the security considerations. Thanks for the very good feedback from the reviewers — thank you so much. The remaining item is to add a conformance checklist at the end of the document, just like RFC 1122; that is the plan, and it will be done in the next revision. Okay, go ahead, Jana.
D: [question inaudible]
C: Good question. We talked about this in the last IETF, and then this draft expired. Maybe we can find some kind of editor, but for an editor it would be a very tough job: they would have to exchange a lot of email with others and lead the discussion to a conclusion, so it needs strong commitment. If someone wants to volunteer to be the editor, that would be very welcome. Jana, do you want to volunteer?
[inaudible exchange]
C: Okay. Are there any questions about the 793bis status? Okay — this is in the middle; I haven't finished it yet. So, 793bis is a very important document for this working group. As you know, it could be the most important document we have.
C: This is a very, very long job — 100 pages, right? We already get some nice reviews from people from time to time; it is a small subset of people, but it would be really good if we could get broader reviews. I know this is a long draft, but if you can just read one section and provide feedback, that would be really, really great. I cannot give an exact schedule, but I think this document is taking a step-by-step approach towards working group last call.
C: So before we start running the working group last call, we would like to make sure there is nothing left unaddressed. If you have any concerns about the current status of the draft, if you think something should be done before the working group last call, or if you have a serious concern about some part of the draft, please speak up — speak up here, or send a message to the mailing list. That will be very helpful for us. Any comments? Questions?
G: Hi everyone, I am here to give an update on the new RACK draft. My name is Yuchung, and this is joint work with my colleagues Neal, Nandita and Priya at Google. Next slide, please. I think most people here are familiar with RACK, but just in case this is the first time you hear about it: RACK, put simply, tries to detect loss efficiently while tolerating a small degree of reordering.
Very conceptually, you can think of every packet you send as having a timer armed with it, and when the timer fires, you retransmit the packet. So for how long do you arm the timer? That is the tricky part. What RACK does is use the most recent measurement of the RTT, and you can imagine that every time an ACK comes back you update all the timers — for all the pending packets that have been sent but not yet acknowledged.
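The per-packet-timer idea described above can be sketched roughly as follows. This is a hypothetical illustration only, not the draft's pseudocode; all names and the data layout are invented for the sketch.

```python
# Toy sketch of RACK-style loss detection: a packet sent before the most
# recently delivered packet is declared lost once RTT + reordering window
# has elapsed since its transmission. Names are illustrative.

def rack_detect_losses(packets, newest_delivered, rtt, reo_wnd, now):
    """packets: dicts with 'xmit_time' and 'acked'.
    newest_delivered: xmit_time of the most recently sent packet that has
    been cumulatively or selectively acknowledged."""
    lost = []
    for pkt in packets:
        if pkt['acked']:
            continue
        # Sent before the newest delivered packet, and the deadline passed.
        if pkt['xmit_time'] <= newest_delivered and \
           now - pkt['xmit_time'] >= rtt + reo_wnd:
            lost.append(pkt)
    return lost
```

Here the "timer" is implicit: rather than arming N real timers, each ACK re-evaluates all in-flight packets against the latest RTT sample, which matches the "update all the timers on every ACK" framing above.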
G: Compare the classic rule: you get three duplicate ACKs, and then you just resend the packets. That, by the way, is also trying to repair the loss within a round trip, which is what RACK is trying to do — except RACK waits a bit more: the timer is armed so that the timeout is an RTT plus a small reordering window, to tolerate reordering.
G: Next slide, please. In the last meeting the main discussion was that there is a lack of design rationale for the RACK reordering window, because the authors just said: okay, a quarter of an RTT, empirically tested, seems to work fine. But the concern is: well, how do you detect reordering, and how do you really handle reordering?
G
Are
we
going
to
make
TCP
so
tolerant
to
reordering
that
now
the
network
gears
will
start
with
auditing
packets
to
help
or
send
one
packet
over
the
moon
and
the
other
over
your
local
area
network?
So
first
thing
first,
is
that
we
need
to
define
how
we
ordering
can
be
detected,
because
the
previous
wrap
doesn't
really
talk
about
it.
He
just
sites,
you
can
implement
some
out
ways,
and
here
is
the
algorithm
in
the
new
draft.
G: If a newly SACKed sequence is below the highest SACKed sequence — which we call RACK.fack — and that sequence has never been retransmitted, then clearly some reordering has happened, because the packets were delivered out of sequence order. You then simply remember that you have seen some reordering. We don't measure the reordering degree; we simply record the incident in a variable called RACK.reord. In addition to that:
G: If the connection supports DSACK, you can further use that to detect reordering. If a packet has been spuriously retransmitted, the previous technique wouldn't work, because you couldn't be sure whether the SACK was SACKing the original transmission or the retransmitted sequence. But a DSACK clearly indicates: if you have retransmitted a packet and you also get a DSACK covering that sequence, then the retransmission was spurious and you needn't have done it —
G: the original one had been delivered — so you can say: okay, that is also reordering. This covers the case where your reordering window is small and you spuriously fire a retransmission, so the SACK sequence alone doesn't give you a clear picture of whether there was reordering or not. But if the connection supports DSACK you can detect that, and the good news is that the major stacks — Windows, macOS and Linux — all support DSACK. Next slide, please. So now we can detect reordering.
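The two detection rules just described — a never-retransmitted sequence SACKed below the highest SACKed sequence, and a DSACK for a retransmitted sequence — can be sketched as follows. The function and argument names are illustrative, not the draft's exact variables.

```python
# Sketch of the two reordering-detection rules described above.

def detect_reordering(newly_sacked_seq, highest_sacked_seq,
                      was_retransmitted, is_dsack):
    # Rule 1: delivered out of sequence order, never retransmitted,
    # so the SACK is unambiguous evidence of reordering.
    if newly_sacked_seq < highest_sacked_seq and not was_retransmitted:
        return True
    # Rule 2: a DSACK for a retransmitted sequence proves the
    # retransmission was spurious, which also implies reordering.
    if is_dsack and was_retransmitted:
        return True
    return False
```

In a real stack the result would be latched into something like RACK.reord rather than returned per ACK; only the incident is remembered, not the reordering degree, matching the description above.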
G
We
put
this
design
rationale
for
rack
wielding
tolerance,
and
first
thing:
first,
is
that
today,
if
you
just
reorder
packets
us,
however,
you
like
it,
you
know
our
sending
over
and
different
paths
between
source
and
the
sink.
Then
the
actually
caused
a
lot
of
problem,
despite
if
you,
whether
you
use
rack
or
not,
first,
the
biggest
problem
is
on
the
whole
stack.
G: there is a very high CPU cost, because receive offload at the receiver no longer works — typically the receive offload at the host coalesces a continuous sequence of packets. On top of that, since you no longer deliver big coalesced frames to the receiver's TCP stack, and these are all out-of-order packets, the TCP standard requires you to send an immediate ACK for every one of them. So say you send N packets and they arrive completely permuted and reordered:
G
Then
you
end
up
sending
an
X
instead
of
one
in
the
typical
case.
So
this
also
adds
pressure
to
the
sender,
because
now
the
number
of
a
see
need
to
process
is
a
lot
higher
and
in
our
test.
If
you
do
this
in
and
say
you
know,
they
ask
with
extremely
short
et
and
very
high
penguins
over
1
Gbps.
You
see
this
very
significant
CPU
increase.
G: So that is the first problem with a highly reordered network. The other one is congestion control: if you send packets over very disjoint paths, most TCP congestion controls assume that the feedback you get is from the same bottleneck, and in this case it is from two different bottlenecks.
G: You can always wait longer to tolerate very high reordering, if you know that the network mostly reorders rather than drops; but the consequence is that when you do have packet loss, you will wait longer to recover the packets. It is always a dilemma: how long do you set the reordering window? You can never get both the best reordering tolerance and the best loss-recovery latency.
G: So, with all those issues, what RACK tries to do is tolerate a small degree of reordering over slightly diverse paths. The most common scene is router parallelism: within a hop, between the ingress and the egress port, there are multiple paths to traverse. This is not necessarily a router per se — it could be a load balancer, or any kind of intermediate node forwarding between A and B — and packets may traverse slightly diverse paths. Typically the traditional approach
is to put a reordering buffer at the egress, but a reordering buffer always costs memory, and you need a lot of logic in the node to deal with it. So sometimes people hate to do that — a lot of hardware designers hate to deal with TCP reordering — and so they may address it only to a limited degree, trying to keep reordering within a certain extent; beyond that, they just let packets reorder.
G: The other common scene is L2 retransmission on a wireless link: the wireless layer bunches up N TCP packets and sends them over, and maybe three of them out of the N cannot be delivered because of channel noise. The link layer then retransmits those three packets, and if the receiver NIC or driver doesn't handle its reordering buffer well, it delivers the N minus three frames up to the TCP stack — and that is why we observe reordering.
G
But
the
sort
of
the
thing
is
that
this
weilding
are
usually
very
small
in
terms
of
the
actual
RTT
and
that's
what
Rack
is
saying:
okay,
we
just
need
to
wait
a
fraction
of
RTT,
but
you
know
up
to
the
maximun
of
the
art
the
the
pass.
That's
the
rack
wielding
window.
He
cannot
handle
anything
beyond
that.
Next
slide,
please.
G: This is what is written in the draft, so people can take a look at exactly what bounds we put on the reordering window. Initially, the RACK reordering window — the extra time beyond the RTT that you wait before retransmitting a packet — should be set to a fraction of the round-trip time. And by round-trip time here I don't mean that of path one or path two, because I cannot distinguish which path a packet actually traveled.
G: So initially we start with a small fraction; in the draft we just make it a quarter, but this is always bound to change as technology evolves. And until reordering has been observed, using our reordering detection, we continue to honor the classic three-duplicate-ACK rule, for two reasons. One is that waiting a little bit longer matters in a datacenter setting, where the RTT is very small — say 20 microseconds — when there is not very—
E: Bob Briscoe, CableLabs. I wonder whether you could say, for that initial reordering window — I mean, you've said this is good to start with, but maybe you could suggest: pick the maximum of three duplicate ACKs or a certain fraction of the RTT, so that in the future, when it is possible to time precisely, you haven't stuck this in. Essentially you're losing a lot of the benefit of using something that's purely time-based if you say: actually, we start by counting.
G: Yeah, that would work. And the third rule is that we don't want to just start with a fraction and then statically set this window, so we recommend that the implementation should also use DSACK as an indicator of spurious retransmission — okay, your reordering window is too small, you had a spurious retransmit — to adaptively increase the reordering window. We don't mandate exactly how you should adjust or adapt the reordering window; we just say that you should take this feedback: okay, you made
G
You
made
the
window
too
small
and
then
you
should
increase
it,
and
the
last
one
is
probably
very
important
that
the
reordering
window
must
have
an
upper
bound
and
it
should
be
set
to
one
round
trip
and
again
this
round
table
is
intentionally
made
a
bit
vague
of
what
exactly
is
the
round
trip
or
in
our
implementation?
What
we
use
is
the
smooth
round
for
time
average,
but
you
can
use
the
most
recent
one
or
a
window
max
filter
of
the
most
recent
RTT.
We
don't
put
like
exactly
how
you
should
do
that.
G: So, with all that said, how do we really compute the reordering window? This is the exact formula we use in our Linux implementation. Our style is: have we detected reordering or not? If we have seen no reordering and we are already in loss recovery, then — since we are already in loss recovery — we believe we got it right.
G
In
this
situation,
we
are
going
to
optimize
to
speed
the
recovery.
We
believe
that
we
have
made
the
right
decision.
This
is
a
loss,
not
reordering,
so
we're
going
to
set
the
wielding
widow
to
zero
and
really
fire
up
the
retransmissions
very
quickly
other
than
that
we'll
use
again,
like
I,
said
the
three
you
back
rule,
which
is
R
active
batch
registry.
G
So
if
more
than
three
packets
often
act
out
of
order
that
you
are
going
to
launch
the
loss
recovery
process
to
repair
the
packets
and
if
we
have
seen
we
are
doing
so,
none
of
that
now
the
those
cases
match
and
we
have
similar
auditing.
We
are
going
to
set
the
reloading
window
to
a
quarter
of
the
minority
and
we
have
this
multiplier
call
reordering
window
increase,
which
we
linearly
increase
and
this
wielding
window
increased
a
variable
will
be
adjusted
based
on.
Have
you
seen
these
acts
in
the
last
round
trip
or
not?
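A rough sketch of the window computation as just described — zero when we trust loss recovery, otherwise a quarter of the min RTT scaled by a DSACK-driven multiplier and capped at one round trip. Variable names are illustrative; the actual Linux code differs in detail.

```python
# Sketch of the reordering-window computation described above,
# loosely following the Linux logic as presented in the talk.

def reordering_window(seen_reordering, in_recovery, min_rtt, srtt,
                      reo_wnd_mult):
    if not seen_reordering and in_recovery:
        # No reordering ever observed and already in loss recovery:
        # we believe this is loss, so retransmit as fast as possible.
        return 0.0
    # Quarter of min RTT, linearly scaled by the DSACK-driven
    # multiplier, never above one (smoothed) round trip.
    return min(min_rtt / 4 * reo_wnd_mult, srtt)
```

For example, with a 40 ms min RTT and a 50 ms smoothed RTT, the window starts at 10 ms and, as spurious retransmissions bump the multiplier, grows until it saturates at the 50 ms upper bound.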
I: But I just recently got some fairly decent data that kind of indicates that maybe min RTT over 8 would be a better starting multiplier, especially since you're doing an adaptive backoff. I'll be presenting that at MAPRG, and the slides should already be uploaded online if you want to take a look.
G
So
I
think
three
meetings
ago,
we
actually
had
ran
an
experiment
to
compare
a
corner
on
an
eight.
We
don't
really
see
any
difference
for
TCP
data
and
again
I
think
it's
fine
I'm,
fine,
changing
it
to
a
II.
Just
we
don't
see
any
data,
so
you
can
say
one
way
or
the
other,
and
we
specifically
leave
out
in
the
like.
Rath
does
not
meant
that
you
had
to
be
a
quarter
TT
so
that
you
know
the
implementation
can
always
choose
the
best
value
that
you
they
believe
is
right.
B: [inaudible]
G: We can do that — it's a good point. Next slide, please. Okay, so I've talked about all the draft updates. Linux 4.18 today fully implements what I just talked about in the draft, and it is on by default; the old RFC 6675 SACK-based recovery, which was formerly used, is actually now disabled.
G
So
the
latest
linux,
as
you
use
rock
as
the
sort
of
the
one
and
only
fast
recovery
mechanism,
and
overall
we
have
reduced
labor
heuristic
in
linux,
about
loss
recovery
from
ten
to
two,
it's
rocky
or
PMF
RTO
on
an
F
RTO
includes
see
the
basic
RTO,
of
course,
and
the
over
the
algorithm
are
development.
In
an
implementation,
as
has
been
concluded,
we
don't
plan
any
major
development
work.
Maybe
bug
fixes
and
so
I
think
that
the
at
least
we
have
an
indentation
fully
implement.
C: I think this draft has been updated and tweaked, so all right. Yes — and also somebody has tried to implement this draft and may come back with some feedback, so maybe we can wait a little bit. But I think the draft is now getting into good shape, so working group last call is close; I would just like to wait a little bit more. Okay.
E: For feedback to the sender, the protocol essentially sends the least significant bits of those counters across, to synchronize the two counters within a round-trip time plus a bit of delay — so it is very simple conceptually. The reason for the duplication of the fields is that, if the option is stripped, you still have a sufficiently useful protocol in the main header with the ACE field, which reuses the header bits that did the negotiation.
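The counter-synchronization idea — conveying only the low-order bits of a counter and letting the sender infer the full count — can be illustrated like this. This is a toy sketch assuming a 3-bit field, not the draft's exact procedure, which also has to deal with ambiguity when many ACKs are lost in a row.

```python
# Toy illustration of low-order-bit counter synchronization: the sender
# keeps a full estimate and, on each feedback, advances it by the
# smallest non-negative increment consistent with the received bits.

ACE_BITS = 3  # width of the field carrying the counter's low bits

def sync_counter(sender_estimate, field_bits):
    mod = 1 << ACE_BITS
    delta = (field_bits - sender_estimate) % mod
    return sender_estimate + delta
```

Because only the bottom three bits travel, this inference is safe as long as fewer than eight increments can be lost between two pieces of feedback received within roughly a round trip — which is why the talk describes it as synchronizing the counters "within a round-trip time plus a bit of delay".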
E: The overriding requirement, if you like, across a number of use cases for this, is to try to reduce queuing delay by having more feedback information within a round-trip time. There are existing cases that give you 500 microseconds of queuing delay over the public Internet, instead of the current state of the art, which is 5 to 15 milliseconds, something like that.
E: So, the changes in the last IETF cycle — I'm now going to go through them. The main one has been in response to a request from Michael Scharf to give justification for why we've designed the protocol the way we have: why we've used the bits we have, and particularly why we've used the header bits. There are three slides on this, reflecting three new sections in a new appendix, Appendix B, all about the header bits.
E
The
first
subsection
talks
about
the
justification
for
the
using
them
on
the
sin
and
the
primary
one
is
that
this
is
how
I
received
3168
ecn
did
its
feedback
using
two
of
those
bits,
the
ECE
and
the
CW
are,
and
the
idea
is
to
be
able
to
fall
back
with,
in
other
words,
it's
backwards
compatible
so
that
a
server
uses
the
latest
fair
into
feedback
that
it
recognizes.
So
if
the
server
doesn't
understand
ecn,
it's
an
old
server,
it
will
see
it
won't
even
look
at
those
flags
whatever
they
are.
E: Firstly, it was already allocated to the ECN nonce, which is now Historic through a process we went through here in this working group; and by combining that flag with the other flags, both in the handshake and during the counting phase once the connection is established, you actually get eight states, whereas if you used that flag as a binary flag you would only get two states.
E
The
original
code
point
that
the
nonce
used
we
don't
use
in
accuracy
n
there.
There
are
sort
of
I
got
an
email
back
from
Anna.
Who
did
the
measurement
studies?
Who
was
looking
at
the
data?
To
tell
us
tell
me
how
many
servers
there
were,
but
sorry
I
haven't,
read
it
because
I
was
dealing
with
another
email,
but
it
was
something
like
five
out
of
twenty.
Six
million
servers
responded
with
that
nonce
flag.
K: [inaudible]
E: So, just to ram home the point I made earlier: if, instead of using that third bit, we had done something else — like use a TCP option to do this negotiation and reserve that bit for some future protocol — then, as I said, that bit could only be used separately from the other two bits, so you wouldn't have got the benefit of the eight states, and you would also have had to put an option on the SYN.
E: We have avoided that. Obviously there is very little space for options on a SYN, and you have the reliability problem of sometimes losing them because of middleboxes. And finally, you would have had a protocol with all this fallback — the old ECN in the header, plus "what about the option" — and you would have to check for all the inconsistencies between things that middleboxes might be changing differently from one another; you would have a more complicated negotiation. So, overall, it was just much simpler and much more efficient
E
To
put
the
third
use
that
third
bit
for
the
negotiation
given
3168
had
already
started
using
bits
in
the
header.
It's
it's
unorthodox
to
use
bits
in
the
header.
For
this
you
know,
tcp
options
would
have
been
more
orthodox,
but
it
sort
of
makes
sense.
I
think
this
was
why
Michael
was
asking.
What
was
the
rationale
behind
this
and
I?
Don't
actually
know
I,
don't
know
where
their
own
does.
I
was
involved
in
3168
standardization,
but
I.
Don't
know
the
rationale
for
why
things
are
done
in
the
header
rather
than
those
options.
F: [inaudible]
E: Okay, I'll move on. The second section of this appendix explains why we use these three bits in the SYN/ACK — the same three bits I was just talking about — and this has been in since draft -04; we're now on -07, actually -08 if you count my local copy. The idea here — this table shows that the SYN from A to B always sets all three bits to one,
E
One
we're
assuming
the
accurate
Sen
client
starts
that
off,
then
all
the
possible
cases
or
late
cases
of
how
those
flags
might
be
sent
back
and
we're
ended
up
since
draft.
Oh
four,
using
four
of
the
states
to
reflect
back
what
was
in
the
IP
header
I
just
finished,
explaining
the
table
yeah,
the
the
second
block
of
states,
their.
E: Bleaching means wiping of the ECT code point in packets, usually going into the Internet, and we believe this is a sign of buggy diffserv wiping that wipes the ECN field as well, because the software thinks it's all still part of the ToS byte — because the people who wrote this code must be more than 50 years old, right?
E
So
we've
had
to
feed
back
the
lease
value
of
the
ecn
field
in
the
IP
header
coming
in
so
the
server
feeds
that
back
in
the
TCP
header
and
then
similarly
a
number
to
the
little
sort
of
white
number
two.
We
do
the
same
thing
on
the
ACK
to
feed
back
what
came
on
the
sin
x,
IP
header
in
the
TCP
header,
going
back
just
to
check
both
ways.
There's
not
this
bleaching
right,
because
if
there
is
this
bleaching,
it's
quite
serious,
it
you're
sending
ECN.
E
You
think
the
other
ends
negotiated
with
you
at
the
TCP
layer
and
then
you
start
using
ECM,
but
it's
getting
bleached
and
and
then
the
other
end
doesn't
hear
any
congestion
experienced.
That
might
have
happened
before
it
was
bleached,
for
instance,
in
your
a
case
would
be
a
mobile
broadband
router
in
your
home,
then
going
over
a
mobile
network
that
bleaches
it
and
you're
getting
congestion
in
your
access
link.
E
Up
to
that,
so
we
thought
it's
very
important
to
make
sure
that
both
ends
can
detect
this
bleaching
and
then
turn
off
ECM
right,
so
that
consumes
the
last
two
combinations
of
all
possible.
Eight
states
in
the
syn,
ACK
and
again,
like
I,
was
asking:
do
we
have
to
do
that
white?
What's
the
reason
for
doing
that,
so
we've
written
that
in
the
appendix
I
think
Michaels
said
well.
He
certainly
said
he's
happy
with
the
appendix
I.
E
Don't
know
whether
he's
happy
with
us
doing
it,
but
the
other
point
about
this
is
that
that
state
in
column,
B
marked
nonce
and
the
other
one
not
broken.
Although
we've
used
up
all
eight
states,
the
nonce
is
potentially
usable
in
the
future.
If
we
want
to
have
a
variant
of
this
protocol
and
the
broken
state
might
eventually
be
usable
as
well,
so
we
haven't
really
used
up
all
late.
E
So
they
may
become
available
later
next
slide,
and
that
leads
well
into
the
this
slide,
which
is
about
what
space
there
is
for
the
future,
and
this
was
what
Michael
was
particularly
concerned
about.
If
we're
introducing
a
new
protocol-
and
then
we
find
it's
an
experimental
protocol,
so
if
we
find
that
it
doesn't
work
or
we
need
to
change
it
slightly,
how
do
we
do
a
variant
of
it
when
we've
run
out
of
space
for
doing
any
statement
that
it's
a
variant
right?
E
You
know
only
any
version
negotiation,
so
we've
written
down
what
what
space
you've
got.
You've
got
those
two
code
points
I
just
mentioned
on
the
syn/ack,
the
the
one
that
was
used
by
the
nonce
and
the
broken
one
I
think
the
nonce
one
would
be
usable
pretty
much
now
given
there's
such
such
a
small
amount
of
it
about
the
other
one
will
take
longer,
then
there
are
five
unused
code
points
on
the
final
act
of
the
three-way
hand
check
this.
This
part,
two
of
the
previous
one
that
can't
really
be
used
or
I'm.
E: So there are seven unused there, but by the time you're getting to the last ACK — or the first data packet — you're not really negotiating any more, you're just declaring. You don't have any more cycles or phases of the handshake for both ends to agree on what you're doing; you're just saying what you are. So you have limited ability to do version negotiation apart from those first two code points on the SYN/ACK — and particularly with TFO,
E
If
you
know
it's
far
too
late
to
have
done
anything
if
you've
already
started
sending
data
or
it
becomes
very
complicated
to
unwind
it
all
right,
but
you
know,
with
we've
stated
what
the
states
are
there
in
the
I've
seen
out
also
the
draft
now
and
so
in
future.
If
someone
wants
to
make
a
variant,
it's
clear
what
they
can
use,
then
that
first
main
bullet
bullet
is
about
variants
of
acura
ECA.
If,
in
the
other
version
of
the
perrolli
parallel
universities
in
the
future,
he
see
an
accurate
ECM
and
everything
doesn't
come
to
anything.
E
How
can
you
reuse
these
bits?
We've
got
five
out
of
the
eight
code.
Points
on
the
scene
are
still
unused
could
mean
something
else
are
listed
there.
They
would
preclude
any
use
of
ecn.
At
the
same
time,
certainly
any
fullback
use
you
might
go
to
sort
that
out
with
us
tcp
option
or
something
and
all
the
weight
of
the
code
points
in
response
to
that
are
available.
Actually
that
sentence
doesn't
makes
it
since
all
eight
except
one.
That
means
seven.
Doesn't
it
yeah
except
yeah,
so
six
of
them.
[inaudible exchange]
I: Hi, Brian Trammell. We've done some measurement on this. It's not a really good longitudinal study, but we've done it at several points in time. Over the period in which we've seen ECN uptake going up, we started seeing CE markings; the things that break ECN seem to be correlated with the things that break DSCP. So there's a lot of ToS—
E: Sorry — which bits are you talking about? I thought you were talking about the ones in the IP header, not the TCP bits.
G: Yuchung Cheng here. I have a lot of comments about this proposal. To me, evaluating this proposal means comparing it with the DCTCP ECN feedback — which today cannot work for the wide area, because with the DCTCP ECN feedback you of course blindly trust the other end—
E: So, this is a poor attempt to summarize all the discussion we've been having that Yuchung just mentioned, and I think the core of it is generic receive offload. The problem that Yuchung was about to describe is that DCTCP has this step function in the network, and it generally gives you runs of CE — congestion experienced — for quite a long time, and then runs of nothing for quite a long time; it sort of toggles on and off.
E: Yuchung has been pointing out that, with generic receive offload hardware and these long runs of CE marking, with data center TCP you don't get a state transition, because there's a long run of the same marking; the GRO hardware sees this long run and lets it all coalesce, and then at the end of the run, when CE stops — or once the cache is full — it passes it up to the stack. Whereas with this counter, every packet is one more CE mark.
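The GRO interaction just described can be illustrated with a toy model — purely hypothetical, not either protocol's wire format. A one-bit DCTCP-style echo rarely changes header state during a long CE run, so offload hardware can coalesce the segments, while a per-packet counter changes state on every marked packet and defeats coalescing:

```python
# Toy model: count how often the feedback-relevant header state changes
# over a run of packets. GRO can only coalesce while the state is stable,
# so more transitions means less coalescing.

def header_transitions(ce_marks, use_counter):
    state, transitions, counter = None, 0, 0
    for ce in ce_marks:
        if use_counter:
            counter += 1 if ce else 0
            new_state = counter % 8      # 3-bit ACE-like counter field
        else:
            new_state = 1 if ce else 0   # one-bit DCTCP-style echo
        if state is not None and new_state != state:
            transitions += 1
        state = new_state
    return transitions
```

For a run of eight CE-marked packets followed by eight unmarked ones, the one-bit echo changes state once, while the counter changes on every marked packet — which is the coalescing problem being debated.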
E: It just seems wrong, right? So we really do need to try to solve this problem with one protocol rather than two. Otherwise, we came up with — Mirja proposed — these three ways to accommodate the GRO hardware problem. One would be two parallel drafts, which is the one that we all prefer out of the three, but none of them is ideal.
E
You
know
it's
yet
the
the
other
two
would
be
to
have
both
mechanisms
in
the
one
draft
and
because
it's
an
experimental
thing
we
can,
we
can
try
them
both
out
using
these
initial
counter
values
to
find
to
find
out
which
one
you're
using
and
the
other
one
would
be.
T2
users,
DC
TCP
mechanism
within
a
curry,
TN
and
get
rid
of
the
counters,
but
certainly
we
don't
want
to
do
that,
because
one
of
the
that
was
one
of
the
main
goals
of
the
protocol
was
to
get
away
from
that
you
know.
G: Yeah, thanks — that's a great summary. I just want to add my point of view: it's always, "if I need to implement something or add more complexity to a stack, what's the benefit", very practically. And I think AccECN would — despite the GRO issue — be far more accurate than DCTCP under heavy ACK losses. But the question I can't answer is: how critical is it
to handle heavy ACK losses? When I think of DCTCP, it's a congestion-control feedback that at least Microsoft, Linux and, I believe, FreeBSD all have, so it's a rather quick transition if we just need to use options or headers to negotiate it, and it does not have the GRO issue, at least in the datacenter right now. So I'm committed to doing more data-comparison work.
G
E
I,
just
before
Corey
comes
to
the
mic.
If
I
can
just
explain
that
normally
it'd
be
fine
to
just
do
this
by
experimentation
and
you
know
just
bring
out
a
new
version
otherwise,
but
because
we're
so
limited
on
space
and
they're,
so
limited
ability
to
do
versioning
of
the
protocol
where
we're
sort
of
having
to
do
sort
of
either
private
network
studies
or
their
studies,
or
something
to
try
and
work
out
a
solution
based
on
what
we
think
is
going
to
be
the
problem,
because
we
can't
really
do
it
on
the
internet.
G: Right — that's why: because I've been burned badly by middlebox issues in open deployment, anything that has the least middlebox concerns always looks very appealing to me, but I'm biased, yeah. Also, you cannot test every single network — or, you know, you make the protocol so complicated that it's just very hard to implement and deploy.
G
So right now I have a slight preference for the two parallel drafts, because that would just make each of the proposals simpler, instead of trying to really glue the two together. But right now I don't really have very good data to justify my preference; it's just that I'm always leaning towards simplicity and middlebox friendliness. I think some people here may not like me favoring middlebox friendliness; well, as they say, middleboxes are at fault and they should fix it.
K
Two different protocol machines going through the network, dealing with middleboxes in two different ways: one of which was trying to work in a controlled, constrained environment, one of which was targeting something slightly different, both of which will, in reality, operate across the internet when we get there. Another point, but...
M
Neal Cardwell. Well, I was wondering if someone could comment on a possible approach of waiting until we have a specific congestion control proposal that needs this, one that is far enough along that it has performance results demonstrating that this is definitely needed, before we try to finalize this proposal; I guess that goes along with Yuchung's comment.
M
I have a similar concern about making sure that, if we're going to undertake something with this level of complexity and possible middlebox issues, we are pretty sure there's a congestion control that really needs it before we go too far down the road. And also, if we deploy Accurate ECN and then, three years later in the development of L4S, we realize that we actually needed something slightly different for L4S, that would be a shame. So I'm wondering.
F
Even so, for me, I think the best use case at the moment is actually ECN++, and deploying the two together is an incentive to actually deploy Accurate ECN. I think it makes much more sense to try to get Accurate ECN deployed now, because then you can eventually use your new congestion control on the internet: Accurate ECN requires the other end to change, while your congestion control usually just sits on one side. So you'd first need the feedback, and then you can change your congestion control.
E
Well, actually, I wanted to add something to what Yuchung said: if you're getting some loss, but not a lot, and say you get a long run of stretch ACKs coming back saying the state machine hasn't changed, you don't actually know whether there was one in the middle of that stretch that did change, because it may have got lost, and then you wouldn't know about it.
E
You
said
I
mean
if
it's
not
just
whether
there's
loss,
it's
because
you
don't
know
about
loss
and
you're
using
delay,
backs
and
stretch
acts,
so
you
don't
actually
know,
in
other
words,
it's
a
non-deterministic
protocol,
because
you
can't
tell
whether
there
was
a
message
in
the
middle
that
changed
things
so
so
yeah
accurate
ecn
is
more
robust
generally,
but
it
has
this
gr
OS.
You
know
certain.
G
For fast networks, say over 1 Gbps, GRO is definitely critical to have; for things like 10-megabit-per-second home links, probably not. But the trend is that bandwidth is going up very, very quickly, so I assume that GRO will become more and more important until there is a better solution; basically, the offloading mechanism is becoming more and more critical, whatever form it takes.
E
And I would add that it is likely that, if this protocol is deployed and becomes used, the GRO hardware will be adapted to make it efficient. But it's again the chicken-and-egg problem of what you do in the meantime: do you just have to throw server resources at the thing, because you cannot use hardware optimization?
N
About deployment, not just within the data center but outside, or across the entire network: in my understanding, all the traffic that you have on the wire would be tunnelled, and therefore you would need a middlebox to look not just at the outer header but at the outer and inner headers, to be able to do Accurate ECN. Is that something that's a concern for you?
E
Is
concerned,
but
it's
not
a
concern
for
a
crazy
sandwich
and
then
to
any
protocol,
but
that's
concerned
with
the
marking,
marking
the
outer
tunnel
and
there's
there's
a
whole
lot
of
working
that
TSV
working
group
that
I'm
doing
actually
on
on.
You
know
that
they've
been
specification
for
how
ACN
is
dealt
with
by
tunnel
decapsulation
since
2001
and
they're
generally
deployed,
but
you
know
there's
those
cases
where
they're
not
and
so
yeah.
That's
a
that's
a
that's
a
different
question,
but
it's
it's!
M
So one thing I just wanted to discuss a little more, thinking out loud about this GRO issue: one possible strategy would be to say that if you want to use Accurate ECN, you can use software GRO, which is definitely something we use at Google; I'm not sure, industry-wide, how much people are relying on hardware for this versus software. But we could just say, you know:
M
If
you
want
the
latest
and
greatest
stuff,
then
you
need
use
software
gyro,
which
is
can
presumably
be
I'm
imagining.
We
could
tweak
it
to
be
aware
of
the
accuracy
and
counting
scheme
and
basically
continue
aggregation
and
just
have
a
simple
state
machine
to
sort
of
make
sure
that
it
doesn't
lose
the
information
that's
being
fed
back
from
the
other
side,
but
we'll
still
still
continuing
to
do
aggregation
and
then,
when
it's
got
a
full
sized
aggregate,
it
can
pass
it
up.
M
You know, how complicated would it be to get it to be high-performance? It's definitely a very hot code path, and you have to look at the cache effects of whatever extra state is carried along. But, as Bob said, my intuition says you could probably do it pretty efficiently, because it's just addition and some number of extra bytes for this state.
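The counter-preserving software GRO the speakers imagine can be sketched as a tiny simulation. This is illustrative only, not the Linux GRO code, and the tuple format is an assumption: ACKs are coalesced while the AccECN counter value is unchanged, and any change in the counter flushes the aggregate, so no feedback transition is lost while aggregation continues.

```python
def gro_coalesce(acks):
    """Coalesce a stream of (ack_seq, ace) tuples the way a
    counter-aware software GRO might: consecutive ACKs are merged
    while the ACE counter is unchanged, keeping only the newest
    cumulative ACK, and a change in ACE flushes the aggregate so
    no counter transition is lost. Returns the ACKs delivered to TCP."""
    delivered = []
    pending = None                     # (latest ack_seq, ace) of current aggregate
    for ack_seq, ace in acks:
        if pending is not None and ace != pending[1]:
            delivered.append(pending)  # ACE changed: flush, start a new aggregate
            pending = None
        pending = (ack_seq, ace)       # extend aggregate, keep newest cum-ACK
    if pending is not None:
        delivered.append(pending)
    return delivered

acks = [(1000, 2), (2000, 2), (3000, 3), (4000, 3), (5000, 3), (6000, 4)]
print(gro_coalesce(acks))   # [(2000, 2), (5000, 3), (6000, 4)]
```

Six incoming ACKs collapse to three, yet every distinct counter value the peer fed back still reaches the TCP layer, which is the property the discussion is after.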
E
I realize we haven't acknowledged each other's contributions in the draft, which we will do, and I checked the recent contributions from Michael and Pavane; they're already acknowledged from previous contributions, so thanks very much. There's a next step, which I actually did this morning, to address the outstanding issues from Michael.
E
But
then
Michael's
raised
a
further
point
which
I
I
guess:
I
need
to
bring
up
here
unless
Michael's
going
to
do
on
my
college
ever
or
whatever
anyway
of
channel
for
him.
Michael
is
concerned
that,
because
this
is
a
change
to
the
wire
protocol,
the
any
passive
monitoring
going
on
of
ACN.
If
it's
not
looking
at
the
start
of
the
flow,
then
we're
changing
the
meaning
of
the
bits
in
the
flow
and
that
passive
monitoring
may
be
connected
to
systems
that
are
doing
I,
don't
know,
load,
balancing
whatever
and
they're.
E
Our
may
confuse
those
peasant
monitoring
devices
now
I
responded
on
the
list.
That
I
don't
think
we
can
say
that's
a
problem
for
a
protocol
specification
to
say
that
you
can't
change
the
wire
protocol
ever
because
there
might
be
passive
monitoring
devices
that
don't
work
very
well.
That
aren't
looking
at
the
start
of
the
flow
to
see
what
what
what
this
stuff
means
and
certainly
talking
to
people
who
do
passive
monitoring,
not
particularly
vcn.
They
always
look
at
this
in
anyway.
I
You'd just crash the world. But I think that what should actually appear there is, you know, a paragraph or two showing your thoughts, exactly like that reply: why this is not a problem. That, I think, would be useful.
K
So, Gorry first, and I'll try to do the minutes in a second. On this whole idea of monitoring in the network: you can look at this stuff in the network if you want, to see if Accurate ECN is being used, and that tells you that this feedback is going on, etc. I'm not sure we should tell people not to do this; if you were deploying a network that relies on ECN, you should be able to do these checks.
K
What's
going
on
here,
the
the
issue
I've
seen
appearing
in
some
EU
discussion
with
operators
was,
should
people
look
at
this
signalling
to
watch,
engage
what
the
net
was
broken,
so
there's
a
difference
between
Cantonese
en
Mart's,
rockeo
TCN
and
the
whole
thing
with
where
the
machinery
works,
and
that
I
think
is
a
more
important
recommendation.
Please
don't
think
an
easier
mark.
Maseeh
e
means
your
networks
broken.
P
Right, but I just... if you're doing passive monitoring, you have to keep up with what people are doing and deal with it. I don't think we should change what we're trying to do here, or make any decisions, because it might make it more difficult for systems that are doing passive monitoring to keep working.
F
Meuk
you
live
in
so
I
think
it
actually
would
not
even
break
passive
monitoring
but
I.
Don't
think
that's
worth
explaining
in
the
traffic.
But
so
if
you
do
it
correctly,
not
the
way
Brian
implemented,
you
would
actually
look.
You
would
actually
look
at
like
when
does
a
bit
change
and
then
figure
out
that
there
must
be
must
have
been
congestion.
F
So
if
you
actually
implement
that,
then
every
time
this
bit
change,
there's
still
this
congestion,
you
get
like
more
congestion
feedback,
because
it's
accurate
easy
and
gives
you
more
congestion
feedback
and
you
still
don't
get
the
the
real
accurate
you
see
encounter,
but
it
still
it's
a
single
of
congestion.
So
it
would
like,
if
you,
if
you
do
it,
it
would
actually
work,
but.
E
I'm not even going to ask about that, because we've got to resolve this GRO issue and things first, so I think we end there. I've just got a couple of things to add. One is related to ECN++, which, apologies, we let expire; we just ran out of time, and we've got a meeting later in the week to edit it and get it sorted out.
E
That's
related
to
our
Croatia
and
so
I
thought
I'd
I'd
say
it
now,
because
we're
recommending
in
the
accuracy
and
draft,
but
it
goes
together
with
ASEAN
plus
plus
in
an
implementation
and
the
other
one,
is
that
I've
got
a
presentation
on
the
implications
on
rack
on
the
of
rack
on
the
on
links
and
reordering
in
links
in
TS
vwg.
So
they're,
you
know
it's
a
bit
of
a
sort
of
heads
up
on
that
in
TS
vwg
on
Thursday.
C
Okay, so this is Yoshi, and I'm going to talk about my simple draft. The title is "Disabling PAWS when other protections are available"; this is my individual draft. Okay, so let me start from the background of this idea. As we already know, RFC 7323 requires putting a timestamp in all segments, and the timestamp option consumes ten to twelve bytes of option space, while the option space is limited to just 40 bytes. That means that, if the timestamp option has been negotiated, it consumes 25 to 30 percent of the entire option space, so it looks like a kind of overhead to me. Then, in the first place, why do we need to put a timestamp in all segments? Basically, the timestamp has been used mainly for two purposes. One is round-trip time measurement, but some research indicates that we don't have to measure round-trip time with every single segment; several segments per round trip are good enough to get a nice RTT measurement.
C
However,
in
case
of
a
pulse,
in
order
to
provide
the
protections,
we
need
to
put
time
stamp
in
every
single
packet.
So
that's
it
spouses
that
requires
all
segments
with
tungsten.
So
in
other
word,
if
we
have
protection
other
than
pulse
and
then
we
don't
need
to
put
time
stamp
in
all
segments
so
that
we
can
utilize
option
space
for
other
purposes,
saying
it
can
increase
CCP's
extensibility
feasibility,
a
lot
more.
That's
the
basic
idea
of
this
talk,
move
to
the
next
right.
C
Let's say we have current endpoints here and here, which establish a TCP connection and exchange TCP packets; this brown rectangle indicates a segment belonging to the current TCP connection. In some cases, a packet which belongs to the current endpoints has been delayed significantly, and since the TCP sequence number space is just 32 bits, if the delay is big enough the sequence number can wrap around, and then this old packet arrives at the current connection.
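The ambiguity behind this first case can be sketched numerically (a toy model, not a TCP implementation): with only 32 bits of sequence space, a sufficiently delayed old segment is indistinguishable, by sequence number alone, from new data that has wrapped back into the receive window.

```python
SEQ_SPACE = 2 ** 32

def in_window(seq, rcv_nxt, rcv_wnd):
    """RFC 793-style acceptance test using modular 32-bit arithmetic."""
    return (seq - rcv_nxt) % SEQ_SPACE < rcv_wnd

# An old segment sent exactly one full sequence-space (4 GiB) earlier
# carries the same 32-bit sequence number as fresh data, so the
# receiver cannot tell them apart from the sequence number alone.
old_seq = (1_000_000 + SEQ_SPACE) % SEQ_SPACE
print(old_seq == 1_000_000)                                  # True
print(in_window(old_seq, rcv_nxt=999_000, rcv_wnd=65_535))   # True: accepted
```

This is exactly the gap the timestamp check is meant to close: the sequence number alone cannot say which wrap cycle a segment came from.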
C
It
might
be
really
difficult
to
distinguish
this
packet
at
this
top
pocket
and
then
that's
why
we
need
a
pulse
in
this
case.
That's
the
primary
case
for
pulse
and
then
second,
the
cases
previously
this
and
the
point
connect
established
at
this
point
and
then,
but
this
point,
this
previous
endpoint
now
disappeared,
but
still
the
packet
belong
to.
This
connection
still
exists
for
some
reason
and
then
try
to
join
the
current.
C
You
know
connections
in
this
case
if
seekest
number
some,
how
much
to
this
connection,
it's
very
difficult
to
distinguish
this
bucket
and
this
pocket
and
that's
another
case
of
pulse
needed
protection,
and
then
third
case
is
some.
There
is
some
malicious
attacker
somewhere
and
try
to
inject
a
packet
and
then
try
to
do
some.
You
know
attack
to
this
connection,
so
this
is
the
third
case
you
know
pulse
can
provide
the
protection
if
it's
possible,
that's
the
souther
case.
Okay,
move
on
to
the
next
slide.
C
The check is this: if t1 minus t2, in modular arithmetic, is less than 2 to the power of 31, the segment is newer; otherwise it's older. This simple logic works for the first case on the previous slide, for all duplicate segments within the connection; it works because the timestamp value monotonically increases within a single connection, so the logic behaves properly there. Then there's the second case from the previous slide, segments from previous connections.
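The comparison just described is the signed 32-bit ordering test from RFC 7323: a timestamp t1 counts as not older than t2 when (t1 - t2), taken modulo 2^32, is below 2^31. A minimal sketch of the PAWS check built on it:

```python
def ts_newer_or_equal(t1, t2):
    """PAWS ordering test from RFC 7323: true if timestamp t1 is not
    older than t2 under modular 32-bit arithmetic (wrap-safe)."""
    return (t1 - t2) % (2 ** 32) < 2 ** 31

def paws_reject(seg_tsval, ts_recent):
    """A segment fails PAWS if its TSval is strictly older than the
    most recently accepted timestamp (TS.Recent)."""
    return not ts_newer_or_equal(seg_tsval, ts_recent)

print(paws_reject(seg_tsval=100, ts_recent=200))        # True: older, rejected
print(paws_reject(seg_tsval=300, ts_recent=200))        # False: newer, accepted
print(paws_reject(seg_tsval=5, ts_recent=2**32 - 10))   # False: newer after wrap
```

Note that this is a binary newer-or-older decision, which is exactly why, as the talk goes on to argue, it offers no exact-match guarantee against segments from other connections or attackers.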
C
Sometimes
this
logic
works.
If
the
timestamp
barrier
monotonically
increase
across
or
TCP
connection,
however,
some
improvement
in
some
implementation,
chemist
and
value
does
not
monotonically
increase
across
or
TCP
connection,
so
some
in
this
case,
this
protection
doesn't
work.
So
in
this
case
for
second
case
pulse
protection
may
work
were
main
a
lot
and
then
subtle
case
the
segment
from
attackers.
In
this
case,
this
protection
won't
work
because
no,
as
I
said,
the
pulse
logic
works
and
just
in
a
binary
decision,
número
order.
C
So
if
the
attacker
uses
random
timestamp
and
then
apical
success
rate
will
be
60%.
So
if
you
attack
ascend
a
multiple
packets,
you
know
yeah,
probably
one
with
the
pocket
can
cheat
a
post
protection
so
in
the
sense
pulse
cannot
provide
protection
against
attackers.
So
if
we
think
this
way,
pulse
can
only
be
effective
for
the
first
case
and
for
second
decide
the
case:
it's
not
very
useful,
so
it
might
be
better.
We
have
better
protection
if
possible.
C
That's
another
motivation
hope
this
draft
move
on
to
the
next
ride
and
then
what
kind
of
technology
can
be
used
as
a
replacement
of
a
pulse
and
I
think
we
have
actually
several
technologies
already
and
so
right
now,
in
TCP
working
group,
TCP
Inc
working
group,
we
are
discussing
the
technology
that
can
encrypt
a
packet
and
if
we
increase
the
pocket,
we
can
easily
distinguish.
You
know.
Packet
belong
to
previous
connection
and
the
packets
belong
to
the
current
connection
because
it's
encrypted.
C
So
if
we
see
older,
a
connection
from
the
oldest
take
over
the
connection,
pocket
cannot
be
encrypted,
cannot
be
decrypted,
because
you
know
it's
gamefreak
to
waste,
define
the
key.
So
it's
very
easy
to
distinguish
all
the
packet
and
the
new
pocket,
so
it
can
provide
stronger
protection
and
unpause
much
much
better
protection
and
then
also
in
case
of
TCP.
C
Mp
TCP
and
participate
maintains
64-bit
cities
number
space
and
in
the
session,
and
then
it
used
data
single
signal
options
and
the
replacement,
and
so
this
indicates
us
as
seeking
signals,
mapping
and
then
by
checking
and
then
this
information
is
sent
to
TCP
option.
So
by
checking
this
option
we
can
Despres.
We
can
use
this
option
as
a
replacement
of
the
pulse
and
and
then
data
syncing
signaling
check
is
much
much
stricter
than
pulse,
because
policy
is
just
no
binary
check
knew
our
order
in
case
of
data
signaling
check,
it's
required
exact
match.
C
So
it's
more
in
a
robust
and
strict.
That's
why
you
know
you
can
provide
stronger
protection
than
pulse
and
then
third
option
is
the
TLS.
This
is
application
protocol,
but
it's
encrypted
packet,
and
then
we
have
seen
lots
of
deployment
in
TLS
these
days
and
if
the
packet
is
encrypted,
just
like
TCP
ink,
you
know
we
we
can
easily
identify.
The
package,
belong
to
earth
connection
and
packet
belonging
to
current
connection,
so
it
can
provide
better
protection,
John
Powell's.
So
if
these
technologies
available,
we
can
disable
pulse
and
the
switch
opportunity
this
technology
for
protection.
C
That's
the
basic
idea
of
this
truck.
Okay
move
on
to
the
next
slide
and
then
also
this
kind
of
a
new
protection
can't
have
another
possible
benefit.
So
if
you
are
running
busy
server,
you
might
see
rows
of
you
know
connection
in
time
way,
state
and
it
looks
like
a
kind
of
overhead,
but
this
is
necessary
because
time
way,
state
is
required
to
avoid
seeing
the
segment
from
the
previous
connection
with
same
endpoint,
and
then
you
have
to
wait
to
a
missile
time
to
be
expired
for
this
kind
of
connections.
But
this
is
not.
C
You
know
great
for
the
performance
of
the
server,
but
let's
say
we
have
new
type
of
protection.
I
have
mentioned
in
the
previous
right
and
we
might
be
able
to
recycle
this
connection
in
time
way
state
because,
as
I
said,
the
purpose
of
the
time
rate
is
to
distinguish
target
from
previous
connection
and
the
packet
from
current
connection.
If
the
packet
is
encrypted,
for
example,
or
we
can
easily
identify,
this
is
coming
from
all
packet,
and
this
is
coming
from.
C
You
know,
drawing
it
to
the
current
connection,
it's
very
easy
to
identify,
and
then
some
people
say
you
know,
yahoo
happy
being
used
this
kind
of
protection
by
using
pulse
Bera.
As
far
as
I
know
these
days,
you
know
this
kind
of
protection
is
not
used
because
we
know
there
a
similar
case.
This
no
pulse
protection
becomes
unreliable.
C
For
example,
if
multiple
client
is
behind
the
same
net
and
it
seems
the
same
IP
address
and
then
because
of
this
pulse
logic
can
be
confused
very
easily
and
then
discarded,
target
unnecessary
and
that's
called
the
trouble.
So
because
of
that,
you
know
some
implementation
record
in
acts
to
say
with
this
feature.
So,
but
if
we
have
a
new
type
of
protection,
even
if
the
multiple
client
behind
a
single
nut,
we
don't
have
to
worry
about
its
pocket
to
be
encrypted.
C
So
we
can
safely
distinguish
all
the
packet
and
a
new
packet
so
with
this
is
this
could
be
another
kind
of
benefit
to
use
new
protection,
okay,
move
on
to
the
next
slide,
and
so
what
will
be
needed.
So
since
we
as
I
mentioned,
the
technology
is
already
already
there,
we
have
a
direct
and
we
have
deployment
with
the
technology
or
English
with
the
MP.
Tcp
has
been
deployed.
C
Tls
is
already
deployed
very
well,
and
so
what
we
will
need
is
just
to
have
a
some
kind
of
signaling
mechanism
to
enable
this
feature
and
the
disable
pulse.
That's
what
we
need
here
and
if
and
then
probably
need
signaling,
because
if
we
just
disable
pulse
without
you
know
getting
a
permission
from
the
other
side,
then
when
a
pocket
might
be
discarded,
because
RFC
73
23
requires
to
discuss
segment
without
timestamp
option.
C
So
if
one
side
all
of
a
sudden,
stop
using
pulse
and
then
stop
putting
the
time
stamp
and
then
packet
might
be
discarded,
but
this
is
not
what
we
want
so
by
using
signaling,
because
you
have
to
make
sure
both
sides
agree
to
disable
pulse
and
then,
after
that,
we
can
stop
putting
time
stamp
in
a
TCP
segment.
So
that's
why
we
need
signaling
mechanism,
okay,
move
on
to
the
next
slide,
and
so
there
a
simpler
way
to
implement
signaling
mechanism.
One
way
it's
new,
you
Smith,
define
new
TCP
option
and
do
negotiation
during
the.
C
Okay, good. So one option is to define a new TCP option and use it for the feature negotiation during the SYN exchange. This is straightforward, but it requires more option space in the SYN segment. The other way is to extend the timestamp option for feature negotiation; Richard previously proposed this kind of idea, so maybe we can utilize his idea to some extent. That's another idea. And the third option is to extend the protection mechanisms themselves; for example, in the case of TCPINC...
C
We
might
be
use
one
bit
of
you
know
global
sub
option
in
you
know,
and
then
you
know
in
case
of
TCP
Inc.
You
know
we
can
use
use
it
by
using
this
one
bit.
We
can
negotiate,
which
we
want
to
disable
pass
or
not,
and
in
case
of
MP
TCP
we
might
be
extend
and
be
capable
or
other
way.
We
can
use
MP
experimental
option
in
order
for
shoots
on
negotiation
and
that's
a
plan
and
and
move
on
to
the
next
slide.
Okay,
this
is
the
rust
right.
Oh
you
want
to
waitress
right.
C
What
you
want
to
speak
tell
me,
go
okay,
so
so
this
is
the
rough
slide
on
my
talk,
so
as
I
as
you
know,
this
is
very
pretty
much
idea,
but
so
at
this
moment
I
would
like
to
you
know,
get
a
feedback
on
the
two
point,
and
does
this
look
a
promising
idea
to
proceed,
or
do
you
see
some
serious
program
or
concern
in
this
rough,
and
if
this
idea
looks
promising,
must
should
be
done
to
be
adaptive?
That's
to
point
I
would
like
to
get
feedback
at
this
moment.
E
The
the
previous
slide
seemed
to
assume
that
I
that
you've
got
to
switch
it
off
after
you
start
it,
but
in
all
the
three
cases
you
gave
MPT
CPT's
being
canned
and
TLS,
then
you
know
at
the
start
that
that
you're
using
them,
so
you
don't
have
to
start
using
pores
and
then
stop
because
you
can
just
never
use
it.
Yeah
am
I
missing
something
or.
E
Okay,
okay
I've
got
another
question
which
is
actually
related
to
a
case
where
you
might
have
to
turn
the
on
later.
In
the
connection
going
right
back,
you
said:
there's
a
50%
chance
of
an
attacker
beating
this.
So
it
strikes
me
that
if
you,
if
you
discover
a
case
where
you
detect
an
attack
with
50%
chance,
then
you
could
turn
it
or
was
on.
Q
So, since you brought it up: first of all, beyond the attack detection there are other protections as well; I mean, you have to guess the right sequence number, so it's not just the PAWS protection that's protecting you. I don't like this... well, I like timestamps, and I'll tell you why. Timestamps can be used for a couple of other things: if you don't have any loss, you can actually use them as a unique identifier, extending the sequence number.
Q
If
you
will,
if
we
made
one
minor
change,
the
time
stamp,
which
you
have
the
time
to
have
often
change
there,
where
you
always
when
you
respond
with
the
sax
and
the
latest
timestamp,
then
they
would
actually
be
an
extension
to
the
sequence
space
already
and
you
wouldn't
have
to
worry
about
it.
Okay,
so
that's
what
I
would
rather
see
that
happen
again.
This!
The
other
thing
is
what
I've
been
playing
with
with
our
BB
r
implementation
is
I
can
look
at
the
time
stamp.
Q
The
other
person
is
sending
me
to
a
series
of
acts
and,
if
I
can
determine
what
timing
value
the
person's
you
of
the
Epirus
using
I
can
actually
more
more
more
accurately
figure
out
what
the
bandwidth
is.
The
peer
is
receiving
my
data,
so
the
time
stamp
option
is
very
handy
to
me
and
I
would
not
like
to
see
it
go
away
with.
Q
...when you've done a retransmission: if we had unique timestamps coming back on every ACK, then, when I see what was reflected back, I can use the timestamp that I sent on that packet, together with the sequence number, to determine which transmission it was; it's a unique identifier.
B
Stuart. Sure, sure, I was thinking the exact same thing. To make this work, you would need some significant software changes: you give the segment to TLS, TLS gets a decryption failure, hands it back and says, don't acknowledge this, don't tear down the connection, wait and see if another one arrives. So it would be some fairly major surgery on BoringSSL to be able to reject segments without killing the connection.
G
Yuchung here; I was going to pretty much say the same thing. To me, PAWS already works; it's not perfect, and now you say disable it and add another layer, or do some kind of cross-layer work, and that needs a lot of work. And a further point is that even your proposal of just omitting TCP timestamps on original data wouldn't work; let me just use a simple case.
G
So
when
you
send
this
original
data,
I
typically
use
the
MSS
without
timestamp
option,
right,
let's
say:
1460
now:
okay,
because
of
loss
detection,
you
need
to
retransmit
this
one.
You
want
to
insert
that
12
byte
that
won't
work
because
your
exceed
the
MTU
or
the
MSS.
So
you
have
to
send
the
14
48
bytes
in
the
first
packet
and
the
remaining
on
the
next
packet
or
you
need
require
somehow
with
packetization,
and
this
is
going
to
really
hurt
the
TCP
stack
performance,
yeah.
M
Yeah
I
wanted
to
echo
a
few
points
and
add
a
couple
more
so
I
I
were
dumb
I'm
a
big
fan
of
timestamps
as
well
I
mean
I.
Think
Randall
raised
some
of
these
points
earlier.
That
timestamps
can
be
super
useful
when
you
have
retransmissions
and
they
allow
you
to
sort
of
quickly
a
deduce
that
a
retransmission
was
spurious
I.
M
I take your point that there might be a clever way to not do the timestamps until you are doing a retransmission, but, you know, others have pointed out that could be problematic. Another thing that I haven't heard pointed out yet is that receivers actually make use of timestamps to do receiver-side RTT estimation.
M
Another point would be that, even though I do encourage folks to keep timestamps, I do see potential value in either negotiating disabling PAWS or adding some bits that allow each side to specify what they want the semantics of those timestamps to be. So, for example, I guess at a previous IETF...
M
We
discussed
that
our
team
is
a
big
fan
of
being
able
to
use
microsecond
granularity,
timestamps
and
with
those
really
the
the
biggest
blocker
is
the
fact
that
the
other
side
might
interpret
those
as
millisecond
granularity,
timestamps
and
apply
pause
semantics
and
then
drop
your
microsecond
tagged
packets,
because
they're
they're,
wrapping
too
fast,
essentially
so
I
do
support
a
way
to
either
disable
pause
or
indicate
that
the
timestamps
may
be
advancing
at
make
or
several
gates
and
need
to
be.
The
pause
time
scale
needs
to
be
adjusted
accordingly.
So.
I
Hi, Brian Trammell. I'm just going to take the other side of the argument for a moment: I think timestamps are terrible. Not really, actually; I quite like them. They radiate an enormous amount of information about the internal operation of the transport, which in a lot of cases is good and in some cases is bad, and giving endpoints control over when they do that is a great idea.
I
I'm a co-author on a draft about timestamp negotiation; I'd be happy to basically dust that off and turn it into what we need for this. At the time it was a little bit of a solution in search of a problem; now it seems like we have... it was a different problem, and it was not quite the right solution for that different problem, because it was basically saying, okay:
I
So,
like
me,
know
more
about
potential,
so
you
know
in
a
lot
of
your
cases,
you
don't
care
because
you
don't
really.
You
can
already
identify
the
Machine
by
the
IP
address,
but
in
some
cases
you
might
want
to
be
able
to
turn
that
off.
I
think
yeah
that
that
probably
the
right
way
to
go
about
the
TCP
side
of
this
is
to
look
at
ttpm
claims
gain
negotiation
figure
out
of
facts.
O
So when we want to deploy TCP on emerging platforms, 100-gig NICs, lots of CPU cores, the bottlenecks we see are usually inside the application, at the border of its interaction with TCP. Namely, we need to optimize buffer allocation in applications, but we can't really do that if we don't have proper signals from TCP. We need to save copies on send and receive; we need to get better notification of what is available in TCP.
O
We
need
to
somehow
coordinate
with
TCP
threads,
to
basically
try
to
preserve
core
affinity
between
user
space
and
the
kernel
I
mean
II
to
lower
per
packet
overheads
on
both
receiver
and
the
sender.
So,
for
example,
the
G
ro
mechanism
that
was
discussed
today
a
few
times.
We
can't
really
get
to
100
Gig
without
having
GRL
or
TSO,
either
in
software
or
in
hardware.
Next
slide.
Please.
O
I will also discuss some mitigations for those. Next slide, please. So, TCP transmit zero-copy was added by Willem in upstream Linux a few months back. It's a very simple mechanism: everything works just like a normal UNIX syscall; you do a sendmsg for sending your data. The difference is that the kernel keeps references to your data blocks and sends that data without actually copying those blocks, and because of that the kernel sends user-space notifications when a block is no longer needed by the kernel.
O
Note
that
if
it
was
just
net
fare
for
I
curve,
we
could
just
allocate
something
and
then
on
map
it
and
let
the
kernel
D
allocate
the
memory,
but
in
real
world
applications.
These
blocks
are
often
shared
with
the
application.
For
example,
we
can
imagine
a
database
with
the
memory
cache
and
these
blocks
are
actual
memory.
Cache
is
given
to
the
kernel,
so
we
need
to
need
proper
synchronization
with
between
TCP
and
the
application
next
slide.
Please.
O
The
notification
mechanism
in
Linux
is
very
simplistic
for
any
successful
Cisco,
which
basically
means
that
positive
number
returned
from
the
send
message
we
assign
an
sequential
ID
and
then,
when
the
blocks,
even
to
the
colonel
on
that,
like
nemesis
call
is
that
can
be
released.
We
just
send
a
notification
on
the
Eric
you.
This
is
very
specific
to
Linux
the
notification
mechanism,
but
we
imagine
it's
easily
extendable
to
other
platforms.
It's
just
a
control.
O
Channel
Eric
Hughes
are
basically
for
getting
ICMP
packets
from
the
kernel
and
then
it
gradually
is
extended
to
four
timestamps
and
all
sorts
of
notification
to
user
space.
It's
more
of
a
control
channel
right
now
we
had
a
normal
data
channel
for
Cisco's
and
then
Eric
you
at
reacts
for
orchestration,
because
some
there
are
so
many
caveats
in
the
implementation,
but
user
space
should
not
touch
the
data
payload,
while
the
references
hold
is
held
in
kernel.
For
example,
you
can't
imagine
a
nick
that
doesn't
actually
have
a
checksum
offload.
O
If
you
modify
that
it
would.
If
you
modify
the
packets
it
would
cause
it
can
cause
data
corruption
in
the
Linux
implementation
of
zero
copy.
We
have
mitigations
for
this,
for
example,
as
soon
as
we
see
the
packet
is
going
through
a
device
that
doesn't
have
checksum
offload.
We
immediately
copy
the
payload
and
fake
zero
copy
notification.
So
user
space
see
this
as
a
as
an
actual
zero
copy,
but
we
are
actually
doing
copy
inside
occur
next
like
piece.
O
Zero-copy isn't really magic: it saves one copy and adds an extra notification to the app. So if you have just one byte to send, it's probably just cheaper to copy, and unless you have large chunks to send it's not really worth it. But when you have 500 kilobytes, or one or two megabytes, of data to send, using zero-copy improves TCP performance by a large margin; we see double-digit percentage efficiency improvement, in terms of how many CPUs we burn.
O
This is a sender that uses zero copy. As you can see, you basically do a sendmsg with MSG_ZEROCOPY, and then you need to poll the error queue to read the zero-copy notifications, and then you can just free the blocks. I put it in the slides for reference, in case someone wants to actually use this feature in an application. This code is very simplified; it doesn't actually work for multi-threaded apps, and I will explain why in the next slides. So there are some caveats on how to handle zero-copy signals.
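The sender flow just described can be sketched roughly as follows. This is an illustrative sketch, not the code from the slides: the SO_ZEROCOPY and MSG_ZEROCOPY values are the Linux numeric constants hard-coded here (the Python socket module may not export them), and the 16 KB cutoff is an invented threshold reflecting the earlier point that small sends are cheaper to just copy.

```python
import socket

# Linux constants, hard-coded because the socket module may not export them;
# values are taken from asm-generic/socket.h and linux/socket.h.
SO_ZEROCOPY = 60          # setsockopt option to opt in to zero copy
MSG_ZEROCOPY = 0x4000000  # sendmsg flag requesting a zero-copy transmission

def send_flags(nbytes, threshold=16 * 1024):
    """Pick sendmsg flags: zero copy only pays off for large buffers, so
    fall back to an ordinary (copying) send below a size threshold.
    The 16 KB threshold is an illustrative guess, not a recommendation."""
    return MSG_ZEROCOPY if nbytes >= threshold else 0

def zerocopy_send(sock, data):
    """Sketch of the sender side: opt in once, then tag each sendmsg.
    The kernel assigns sequential IDs to zero-copy sends; completions
    arrive later on the socket error queue (MSG_ERRQUEUE), and the
    buffer must not be touched until its completion is seen."""
    sock.setsockopt(socket.SOL_SOCKET, SO_ZEROCOPY, 1)
    return sock.sendmsg([data], [], send_flags(len(data)))
```

A real sender would then poll the error queue for completions before reusing the buffers, which is where the multi-threading caveats mentioned above come in.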
O
But the idea is very simple: you just do a sendmsg, you tag your blocks with a sequential ID, and then you receive those IDs on the error queue and release your blocks. Next slide, please.
So one caveat of TCP zero copy is that we cannot release a data reference when a packet is merely sent. If you receive a SACK for that block, you may still need it for retransmission, so we need to make sure the data is fully ACKed. And even if the data is fully ACKed, we still cannot
O
release the packet, because a copy of it may still exist. For example, a retransmission of the packet may sit on the device transmit queue for a long time while we receive the ACK for that packet, so a copy of that packet may sit somewhere on the end host. For that reason, the signals are generated only when we really don't need that packet any more, meaning the data is fully ACKed and no copy of the packet is sitting anywhere on the end host.
O
As a result of this, for zero copy the app may receive delayed or coalesced notifications, which can complicate the release process in user space. But this is a conscious decision that was made to make sure the implementation in the kernel doesn't have too much overhead, by just coalescing these release signals. Next slide, please.
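The deferred, coalesced release scheme described above can be sketched as user-space bookkeeping like this. It is a hypothetical helper, not kernel or production code; it assumes, as on Linux, that each error-queue notification carries an inclusive [lo, hi] range of the sequential IDs assigned to zero-copy sends.

```python
class ZeroCopyTracker:
    """Bookkeeping a zero-copy sender needs in user space: each
    MSG_ZEROCOPY sendmsg gets the next sequential ID, and completions
    arrive later as coalesced, possibly delayed [lo, hi] ID ranges
    (only once the data is fully ACKed and no clone of the packet
    remains on the host). Illustrative sketch only."""

    def __init__(self):
        self.next_id = 0
        self.in_flight = {}  # id -> buffer that must stay untouched

    def sent(self, buf):
        """Record a buffer handed to the kernel; return its ID."""
        zid = self.next_id
        self.next_id += 1
        self.in_flight[zid] = buf
        return zid

    def completed(self, lo, hi):
        """Handle one coalesced notification: release IDs lo..hi
        and return the buffers that are now safe to reuse."""
        released = []
        for zid in range(lo, hi + 1):
            buf = self.in_flight.pop(zid, None)  # tolerate duplicates
            if buf is not None:
                released.append(buf)
        return released
```

The coalescing is why user space has to track ranges rather than expect one notification per send.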
O
Also, we need to get this as part of the read itself, because while I'm reading from a TCP socket there may be packets incoming on that socket, and there may be more data available right after the read returns. So we added this to Linux: when you read some data, the kernel will tell you how many more bytes there are to read.
So, for example, you can call recvmsg on the FD with a 1K buffer.
O
It returns how many bytes have been read and also tells you how many more bytes you can read. Using this data we optimize buffer allocation in user space: we allocate just enough buffer, because the kernel guarantees that, say, 65K is actually available to read, so we don't over-allocate; we allocate optimally and then read the data. We see 3 to 5 percent more efficient reads and writes for small and large RPCs with this mechanism.
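The allocation idea can be sketched as a small helper. The function, its default and its cap are hypothetical; only the idea of trusting the kernel's "bytes still queued" hint (what Linux exposes alongside recvmsg) comes from the talk.

```python
def next_read_size(inq_hint, default=4096, cap=1 << 20):
    """Size the next receive buffer from the kernel's in-queue hint.

    With no hint, fall back to a default guess; with a hint, allocate
    exactly what is readable, since the kernel guarantees that many
    bytes are available, capping it to keep allocations bounded.
    The default and cap values here are illustrative."""
    if inq_hint is None or inq_hint <= 0:
        return default
    return min(inq_hint, cap)
```

The win described above comes from replacing a fixed over-allocation with an exact one on every read.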
O
They read the whole frame: for example, an HTTP/2 server cannot actually do anything until it has the whole frame ready, so that it can process that frame. So what we did at Google is basically use the receive low watermark: we set a watermark to silence the events until all the bytes we need for processing are available.
O
This way we lower the wakeups in user space for TCP, and while we are not receiving those notifications we can actually handle more useful notifications from other TCP sockets, and we can easily scale to hundreds of thousands and millions of connections, just because we don't wake up spuriously. Next slide, please.
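What the receive low watermark achieves in-kernel can be illustrated with a user-space analogue. This is a hypothetical helper for counting wakeups, not what the talk deployed; the in-kernel version instead sets SO_RCVLOWAT to the frame length so poll() itself stays silent until a full frame is buffered.

```python
class FrameWatermark:
    """Deliver data to a frame handler only in whole frames, waking the
    handler once per complete frame instead of once per arriving
    segment. A user-space illustration of the low-watermark idea."""

    def __init__(self, frame_len, on_frame):
        self.frame_len = frame_len
        self.on_frame = on_frame  # callback invoked per complete frame
        self.buf = bytearray()
        self.wakeups = 0          # how many times the app was woken

    def data_ready(self, chunk):
        """Called for every chunk the transport delivers."""
        self.buf += chunk
        while len(self.buf) >= self.frame_len:
            frame = bytes(self.buf[:self.frame_len])
            del self.buf[:self.frame_len]
            self.wakeups += 1
            self.on_frame(frame)
```

Four small chunks carrying two 4-byte frames produce only two wakeups, which is the scaling effect described above.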
O
We have two sets of threads: one set that actually does TCP processing and one that actually reads from the TCP socket. If the data of a socket is processed on one core by the TCP threads and then read on another core by a user-space thread, we see more than 50% throughput regression, just because of invalidating caches and having to coordinate these CPUs. Linux uses very simple heuristics: for example, it used to set the CPU ID of a socket whenever you send, receive or poll on a socket.
O
That basically means: I expect the user-space thread to read this FD on this core, so I will try to process packets of this socket on that core. The problem with that model was that, for example, in a typical Google application there is only one poller thread and all the FDs are polled by that one thread, so Linux would send all the packets of all TCP sockets through the same core, and then user-space threads would read them from other CPUs.
O
We removed this heuristic from poll and immediately saw more than 50% throughput improvement for those applications. But even with that, the heuristic of expecting user space to read from a specific core would not actually work, because the scheduler moves threads around and there is contention.
O
O
There is more than a 10% gain in efficiency and latency as soon as we do this, but this mechanism is not perfect, because there are constraints that may move threads around. This is a very difficult problem to fix unless we actually know what is happening inside TCP.
O
So, the next slide, I think. This is just a summary, based on the efforts we have made at Google to try to make sure TCP is as efficient as it gets on emerging platforms. We think optimizing the TCP core, like congestion control and making it resilient to reordering and loss, is necessary; we can't really go to those emerging platforms without these developments. But it's not really sufficient to get the ultimate performance for user-space applications.
O
The performance of TCP really depends on how TCP is used: for example, how people poll FDs, how the events are managed, how they allocate buffers. So we think there's value in looking at how we can provide more signals to the applications, so they can make better decisions on how to use TCP.
O
But the heuristics that we used in Linux for setting cores, for example on poll, were based on a very specific application; we really didn't look at the wide variety of applications that are actually using TCP, and that heuristic was actually hurting the performance of user space, just because we were not preserving core affinity. So we really need to make sure all these implementation heuristics are guided by how applications use TCP. Throughout this work we used both old and new TCP metrics.
O
For example, we have new transmit and receive timestamps in Linux that we use to try to demystify latency in different parts of TCP. Yuchung added very nice chrono stats to TCP in Linux, so that we can, for example, say: out of the 100 seconds this connection has been open, how long have we been busy sending data?
O
How long have we been receive-window limited? We use all of those in our research. Other tools such as netstat, ss, Linux perf, flame graphs and eBPF tracing are very handy in debugging these issues and making sense of the latency problems. This is the last slide. Thank you for your time, and I'm happy to take any questions.
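The chrono stats mentioned above can be turned into per-connection summaries along these lines. The field names follow Linux's struct tcp_info (microsecond counters for busy, receive-window-limited and send-buffer-limited time), while the helper itself is hypothetical; obtaining the struct (getsockopt TCP_INFO, or ss -i) is out of scope of this sketch.

```python
def chrono_fractions(tcpi_busy_time, tcpi_rwnd_limited,
                     tcpi_sndbuf_limited, lifetime_us):
    """Express the Linux TCP chrono counters (microseconds spent busy
    sending, receive-window limited, and send-buffer limited) as
    fractions of the connection lifetime, also in microseconds."""
    if lifetime_us <= 0:
        raise ValueError("lifetime must be positive")

    def frac(us):
        return us / lifetime_us

    return {
        "busy": frac(tcpi_busy_time),
        "rwnd_limited": frac(tcpi_rwnd_limited),
        "sndbuf_limited": frac(tcpi_sndbuf_limited),
    }
```

For a 100-second connection, this answers exactly the question in the talk: what share of its lifetime the connection spent actually sending versus being limited by the peer's window or its own send buffer.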
D
Jana Iyengar. Thank you for the presentation; all of this is quite interesting, and thank you for discussing something that's different from just drafts and bits. So I want to ask: you talked about various things, and on the slide I see cooking large TSOs and so on, but to some degree you're conflating workloads here, and I would like to see.
D
But I would like to see if anybody's trying to quantify the value, or lack of value, of TSO, for example on public-facing workloads specifically, because I know that Google uses FQ, and how much TSO actually plays into performance for delivering content. Can you speak to that?
O
Sure. Well, Yuchung and Neal are here and have deeper knowledge of that part of the infrastructure, but most of the things I talked about today are focused on 100-gig NICs and fabrics on emerging platforms, with RTTs on the order of tens of microseconds; except for zero copy, which can actually save single-digit percentages of CPU cycles when serving content to users, especially content that has to be processed and is not a static file. As for TSO specifically, for 100-gig NICs,
O
we can't really saturate the NIC without TSO. That can cause congestion, as I think you have been referring to, for Internet-facing traffic or connections with lower bandwidth, so it's a double-edged sword. But for 100-gig NICs, having a full TSO burst sent out of the NIC without engaging software is a must; we can't really reach 100 gigabits without it.
M
Neal Cardwell, Google. Just to add to what Soheil was saying, and to address Jana's questions: yeah, I would agree with Soheil about most of these things being able to benefit Internet-facing applications, and in particular we definitely have experience with TCP zero copy helping Internet-facing applications that send a lot of data.
M
Serving video, for example. And then, as far as TSO: yeah, TSO definitely helps Internet-facing stuff as well, though obviously to a lesser extent, because for Internet-facing workloads the bandwidths are not a hundred gigabits; they're closer to 10 or 15 megabits, that kind of thing. But even at those lower bandwidths there's a significant benefit from TSO. So, for example, the Linux TSO autosizing logic tries to create TSO jumbo packets, or bursts.
M
J
S
Okay, hello everyone. My name is Carles Gomez, and I'm one of the authors of the draft entitled "TCP Usage Guidance in the Internet of Things". This will be a quick presentation, basically a heads-up, as the chair said before, on the fact that the authors of this document think it is getting ready to request working group last call quite soon. Maybe some of you remember that I presented the very first version of this document in TCPM back at IETF 96 in Berlin, and since then it was decided
S
that the home for the document would be the LWIG (Light-Weight Implementation Guidance) working group. So the document has been presented several times there, although Michael Scharf, who is a chair here in TCPM and also a co-author of this document, has been keeping this working group in the loop through the mailing list. Our plan is to submit the next version, which will be -04, possibly with a minor editorial update, around September or October, then request a presentation in Bangkok in both TCPM and LWIG, and possibly around that time request working group last call.
S
So next, please. As a reminder, the goal of the draft is not to define a new TCP version, nor to define new TCP mechanisms; it is to describe how TCP can be used, configured or implemented in constrained-node networks, which are quite typical in IoT scenarios, and to present the related trade-offs. Next, please. So, as a quick overview of the contents of this document, this is the table of contents.
S
After the introductory sections, we have Section 3, which presents the main characteristics of constrained-node networks that may be relevant for TCP, comprising node, network and link properties, as well as usage scenarios and traffic patterns. Then Section 4 provides the TCP implementation and configuration recommendations and content for CNNs. This is organized in three subsections: the first one deals with mechanisms or parameters that are related to path properties, such as MSS, ECN and so on. Then,
S
then, Section 5 discusses how applications may use TCP connections and related aspects. Then we have the security considerations, which are quite specific to this topic, and finally we have an annex which intends to survey the characteristics of the main TCP implementations for constrained devices. Next, please. To illustrate this, we have a summary table from the annex, which intends to collect details on parameter settings, on whether some mechanisms are supported or not, and also on memory requirements in terms of code size, for eight different TCP implementations for constrained devices. So, next,
S
please. So again, the plan is to submit a -04 around September or October, and then possibly request working group last call by that time. In preparation for that, it would be great if we could get your feedback and comments on the document. So thank you very much in advance. I don't know if there are any questions.