From YouTube: IETF111-TCPM-20210727-2300
Description: TCPM meeting session at IETF 111, 2021/07/27 23:00
https://datatracker.ietf.org/meeting/111/proceedings/
A: Okay, then we start. This is TCPM. We have three co-chairs: Yoshi, Michael Scharf, and myself. I'm moderating this meeting this time.
A: You should have seen the Note Well, at least when you registered for the IETF, so we are bound to these rules. Logistics: we have a notetaker; Gorry agreed to do this, thank you very much for doing this again. The jabber scribe is Michael Scharf, at least when he's not presenting; so if there are comments during his presentation, maybe someone else can put the stuff in, but as far as I know this is also reflected in the chat in Meetecho. If you want to submit a draft which is of particular interest to this working group, please include "tcpm" in the name, so it's simpler for us to find these documents. Then, as usual, the session is being recorded, so whatever you say, or if you turn on your camera, is recorded. And for the presenters: Meetecho has a feature where we can upload the slides to the tool, which has been done, so you can run your own slides.
A: So if you want to do that, you can do that; if not, Yoshi will run the slides.
A: This is the agenda for today: a working group status update, and then we have four presentations, all on working group documents. Now the status update. The documents are in various states, as we will see here. We have finished our work on two documents since the last IETF: the 2140bis document is in the RFC Editor queue, it will be RFC 9040, and 793bis is in IESG last call. So these documents are almost done, by the way.
A: From the perspective of the working group, we have a milestone on the next document, which is an update of PRR. It hasn't changed since the last IETF, and we don't have a presentation this time, so we hope that we get a presentation at least by the next IETF.
A: The next four milestones we have covered by presentations. The RFC 8312bis document is actually in, or after, working group last call: the working group last call was started and we haven't formally closed it, but we will get a presentation on the changes which have been made due to the comments in last call, and this is your last chance to provide comments. The other documents are still being worked on. The last three documents, with milestones at the end of this year or later, we don't have presentations on, but these documents still have some time to be worked on. Yeah, that's the status of the documents. Any questions?
A: Comments? If that's not the case, then we can move on to our first presentation: HyStart++.
C: Okay, hello everyone. Today I'm going to be talking about HyStart++, particularly some recent work we have done to add jitter resiliency to this algorithm. This is work done with Yi and Matt, but I'm going to be presenting today.
C: So, a quick recap of what HyStart++ is. The original motivation for this algorithm was that standard slow start, as described in RFC 5681, can overshoot the ideal send rate, sometimes by as much as 2x, and can cause massive packet loss. It basically results in the connection spending a bunch of time in recovery, sometimes.
C
So
under
draft
zero
one,
it
was
a
simple
modification
to
slow
start.
The
delay
increase
algorithm
from
the
original
high
start
paper
was
used.
D
C: It triggers an early exit from slow start when, basically, we detect that the delay is increasing due to the bottleneck buffer filling up. But because that would sometimes cause a premature exit, we compensate for that by introducing something called Limited Slow Start, which still grows the congestion window faster than congestion avoidance, particularly because even with CUBIC the ramp-up would take some time.
C: We found that HyStart++ was leading to suboptimal performance: basically, there would be latency spikes in WAN transfers that would trigger HyStart, and...
C: We might have to go back and look at why this was happening, but it was pretty consistent for at least a week or so. There are other reports of performance problems due to HyStart, particularly on high-latency networks. There was one presentation in MAPRG last year which talked about the performance not being good on mobile radio networks; this was also raised as an issue by Christian on the TCPM mailing list; and there's also a paper about HyStart performance over a satellite network that was presented at the netdev conference.
C: ...whether the exit from slow start was correct or spurious. So effectively, the change to the algorithm is that once we trigger the exit from slow start, we enter a Conservative Slow Start phase. The goal of the Conservative Slow Start phase is to still grow the congestion window, just not as aggressively as slow start, because obviously that goes back to the original problem of overshoot.
C: We grow the congestion window slower than original slow start, basically as a fraction of standard slow start, and then we measure whether the RTT shrinks at any time during this phase. If the RTT shrinks, the exit was spurious and we effectively resume HyStart++, which means we go back to doing standard slow start, including the whole algorithm from the beginning, which continues to measure whether there are further delay spikes.
C: ...whether the exit from slow start was spurious or not. So these are the details of the algorithm. In standard slow start, we update the congestion window as per RFC 5681.
C: RTT samples are taken in that round. Then we check if the current round's minRTT is greater than the last round's minRTT plus a threshold. The threshold is basically clamped at both the minimum and the maximum.
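The delay-increase exit check described above can be sketched roughly as follows. This is a non-normative illustration, not the draft's pseudocode: the function name and the sample-count gate are my own phrasing, while the eighth-of-minRTT threshold and the 4 ms / 16 ms clamps follow the presenter's description.

```python
MIN_RTT_THRESH = 0.004  # 4 ms lower clamp (under discussion on the list)
MAX_RTT_THRESH = 0.016  # 16 ms upper clamp

def slow_start_exit_triggered(last_round_min_rtt: float,
                              curr_round_min_rtt: float,
                              n_rtt_samples: int) -> bool:
    # Only decide once enough RTT samples were taken in this round.
    if n_rtt_samples < 8:
        return False
    # Threshold: a fraction (an eighth) of last round's minRTT,
    # clamped to [MIN_RTT_THRESH, MAX_RTT_THRESH].
    thresh = min(max(last_round_min_rtt / 8, MIN_RTT_THRESH), MAX_RTT_THRESH)
    # Exit slow start if this round's minRTT grew by at least the threshold.
    return curr_round_min_rtt >= last_round_min_rtt + thresh
```

For example, with a 40 ms minRTT last round, the threshold is 5 ms, so a 50 ms minRTT this round would trigger the exit.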
C
There's
some
discussion
on
this
that
I'll
bring
up
in
the
last
slide,
there's
a
new
change
to
the
algorithm.
Now
so,
when
we
do
exit
slow
start,
we
now
take
a
baseline
measurement
for
the
conservative,
slow
start
phase,
so
the
basically
at
the
round
where
we
exit
slow
start,
we
snapshot
the
current
rounds.
Minority,
the
conservative,
slow
start
phase
lasts
a
few
rounds
currently
per
the
draft.
The
recommendation.
C
C: ...because it could be triggered in the middle of a round; that's why it's four to five. On each ACK in this phase, we continue to update the congestion window.
C
The
growth
factor
here
is
basically
a
fraction
of
the
condition
window,
as
grown
by
standard
slow
start,
so
we
have
a
growth
divisor.
This
is
another
value
that
is
worth
experimenting
with
and
then
for
each
round
in
css
we
go
back
to
detecting
if
the
rdt
is
shrinking
or
not
so
effectively.
If
the
current
main.
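The Conservative Slow Start (CSS) behavior described in this part of the talk can be sketched as below. This is an illustration under stated assumptions, not the draft's normative pseudocode: the state-class and function names are invented, the divisor of 4 and the 4-5 round duration come from the presentation, and the `min(bytes_acked, L*SMSS)` per-ACK limit mirrors the byte-counting style of slow start growth discussed here.

```python
from dataclasses import dataclass

CSS_GROWTH_DIVISOR = 4  # fraction of the slow-start growth rate (worth experimenting with)
CSS_ROUNDS = 5          # CSS lasts roughly four to five rounds

@dataclass
class CssState:
    cwnd: int
    css_baseline_min_rtt: float          # minRTT snapshot taken at slow-start exit
    curr_round_min_rtt: float = float("inf")
    css_rounds: int = 0
    phase: str = "css"

def on_ack_in_css(s: CssState, bytes_acked: int, smss: int, L: int = 8) -> None:
    # Grow cwnd at a fraction of the standard slow-start rate.
    s.cwnd += min(bytes_acked, L * smss) // CSS_GROWTH_DIVISOR

def on_round_end_in_css(s: CssState) -> None:
    # A shrinking minRTT (below the exit-time baseline) means the exit was
    # spurious: resume standard slow start, with the whole algorithm restarted.
    if s.curr_round_min_rtt < s.css_baseline_min_rtt:
        s.phase = "slow_start"
        return
    # Otherwise, after CSS_ROUNDS rounds, fall through to congestion avoidance.
    s.css_rounds += 1
    if s.css_rounds >= CSS_ROUNDS:
        s.phase = "congestion_avoidance"
```

The trade-off the presenter mentions is visible here: a larger divisor is more conservative while the spuriousness decision is pending, a smaller one keeps performance closer to slow start.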
C: ...I can explain a little bit what each of these cases is. So basically, for the first set of bars for example: 25 milliseconds is the RTT of the link, 100 Mbps is the bandwidth, 312 packets is the bottleneck buffer size, 64,000 bytes is the I/O size, and 500 is the number of I/Os. We're effectively trying to measure goodput, which is also a sort of proxy for flow completion time. So what we find is that even with the new algorithm, the performance is really good.
C: It is doing particularly well compared to having no HyStart in the presence of jitter. In this case, two parameters are added here. One is the 40 ms (I'm talking about the very first set of bars again): 40 milliseconds is the maximum jitter that we introduced for a given packet. And "20 deviation" means there's a one-in-20 chance that a particular packet will experience jitter. So we basically introduce artificial jitter.
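The jitter model just described can be sketched as a tiny emulation helper. The one-in-20 probability and 40 ms maximum come from the slides; the choice of a uniform distribution for the added delay is an assumption of mine, since the talk doesn't specify one.

```python
import random

def jittered_delay(base_delay_s: float,
                   max_jitter_s: float = 0.040,  # 40 ms maximum jitter case
                   deviation: int = 20) -> float:
    # "20 deviation": a 1-in-20 chance that a given packet experiences jitter.
    # The jitter amount is drawn here as uniform up to the maximum (assumed).
    if random.randrange(deviation) == 0:
        return base_delay_s + random.uniform(0.0, max_jitter_s)
    return base_delay_s
```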
C: These are lab measurements with an emulated bottleneck link. So here, as you can see, in the presence...
C: ...HyStart; but with the new algorithm we do much better than the older algorithm. There are a couple of test cases here where it's worse than not having HyStart, but in sum total, when you look at the previous slide, where we had no...
C: ...we still believe that the default of having HyStart on still makes sense. As for the number of rounds, we did try varying it; these are just some results from a bunch of different test cases where you vary the number of rounds. Across all these test cases, we found that the four-to-five-rounds number makes the most sense in terms of performance overall. Across these test cases we did compare, you know...
C: So this is basically how long you end up spending to determine whether the exit was spurious or not. And I would like to mention that, because we're still growing the congestion window, the performance will remain good; so we are erring on the side of still keeping performance good while we detect whether the exit was spurious or not.
C: We are doing real-world A/B measurements; we don't have the data to share at the moment, but we would be happy to share that with the working group as soon as we have it. We would like to reevaluate the fixed RTT threshold clamps; right now they are 4 milliseconds and 16 milliseconds. Bob had a question on the mailing list about whether these values could be determined...
C: ...as a function of the RTT. That's a little counterintuitive to me, because the bottleneck buffer occupancy is not directly related to a given flow's RTT. So while we would like to make this dynamic, it's not clear to me how we should go about doing that right now.
C: We do find that going lower than four milliseconds introduces more noise, and the 16 millisecond number is certainly worth revisiting.
C
To
figure
out
how
to
make
those
thresholds
more
dynamic
and
one
question
for
the
intern
for
the
working
group
is
so
currently.
This
is
on
standards
track
based
on
previous
discussion
in
the
working
group.
One
outstanding
question
I
have
for
the
group
is:
should
we
change
the
status
to
experimental
because
we
are
still
making
changes
to
the
algorithm
and
and
improving
it?
So
I
would
like
to
hear
from
both
the
chairs,
as
well
as
the
working
group
on
what
the
intended
status
should
be
and-
and
I
think
that
concludes
the
presentation.
A: Thank you for the presentation. Yeah, so the chairs want to know the opinion of the working group, so...
G: Okay, how can I speak? Okay. So, well, one question I have: there are several tuning parameters, like the CSS growth divisor. While you are preparing drafts, there might be a possibility that you might want to change the recommended value, right? Or do you want to publish it and then, when you publish a new version, like for another proposed standard, put in a new value? Which was your intention?
C: Now, for example, the divisor would determine how aggressive or not your congestion window growth is while you're determining whether the exit was spurious. So there is a trade-off here in terms of performance versus being conservative. Similarly, how many rounds you spend detecting whether it was spurious also has trade-offs, right? So we would pick these values based on experimental results and real-world data. All of these are worth experimenting with.
C
The
values
that
we
have
currently
put
in
the
draft
are
a
result
of
lab
measurements,
but
the
draft
would
also
say
that
you
know
implementions
can
experiment
with
these
values.
So
I
guess
my
question
to
the
to
to
the
community.
C: No; so right now, yeah, the RTT thresholds certainly not. I think those constants are something that would be nice if they were dynamic. It's just unclear how a TCP could determine what they should be based on a given bottleneck buffer, because it's extremely difficult to measure the buffer occupancy or the bottleneck buffer size, and because this is less a function of the connection's RTT.
C: ...is building up. So I would like to know, even if we make these dynamic... I guess it equally applies to [unclear], I think.
C: It's more of a research question, and I would appreciate it if folks have insights on that. What I would say, though, is that with the values that we have measured, the algorithm is extremely effective. So, based on at least one data point, even with these fixed thresholds it seems to work really well in practice.
H: Yeah, yeah, go ahead, Bob. Okay. I don't really have an opinion on Experimental versus Proposed Standard, but I just wanted to clarify: when I made the points about the RTT thresholds (and it's relevant to this question of Experimental or Proposed Standard), it was really more about whether you can write in the draft how you got to the numbers; that's probably more important than the numbers themselves.
H: You know, what they are relative to other things, why you've chosen them, and things like that. Then, as the environment changes in the future, even if the RFC becomes outdated, people can still get information out of it as to how things should be set.
I: So I've been hearing a lot of discussion around constants, and now Bob also comes up with determining how these constants could be tuned to a specific environment. But overall, to answer your original question, I would lean more toward Experimental status at this point in time, as long as there is no well-defined environment where HyStart++ should be using a certain set of recommended values for all of these things that you mentioned; because ultimately, at a later stage, we still have a process to go from Experimental to Proposed Standard.
E: (Martin) Yeah, as an individual: you know, making things Experimental does create problems when we try to incorporate this into other standards. I would say that if the working group, and of course the proponents, feel strongly...
E: ...that this is generally applicable, and that we would probably be comfortable deploying it on just about every TCP host out there, and that that would have salutary effects for everyone; if we reach that level of confidence, I think we should make it a Proposed Standard.
J: Yeah, I think that, Praveen, you have to be proud of your work and just go for standard, because your work does fix a bug in the original HyStart. So hey, let's go for it; and if there's an issue that has to be fixed, well, then fix it, but go for it.
B: Yeah, this is Michael speaking. First, I'd like to mention that there are a couple of plus-ones for standards track in the chat, so that is also something to take into account. Second, with the chair hat on, I think we don't have to make that call right now, because we can keep the status as standards track for the time being and decide during working group last call whether to change our mind. We have done, I think, a similar thing before.
A: Yeah. My point is that if you have parameters in your algorithm and the values depend on conditions or trade-offs, that's not a reason for me to go for Experimental. So if you can lay out: this is the algorithm, these are the parameters...
A: Any more comments? I have one question: when you send packets, are you sending them in bursts, or are you putting some delay between them?
C: Right now, no, there is no pacing here; they will be sent out in bursts. There is a cap on the amount of burst as a function of how the ABC limit is computed: an ACK can only generate so much data in return, so that's the cap that sort of protects against burstiness in our implementation. But I see no reason why a paced implementation would not also benefit from this.
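The burst cap via the ABC limit mentioned here can be sketched as follows. This is illustrative only: RFC 3465 (Appropriate Byte Counting) specifies a limit of L=2 segments per ACK, the discussion below mentions that Windows historically used 4, and the value used in the block is just a placeholder parameter.

```python
def bytes_releasable_on_ack(bytes_newly_acked: int, smss: int, L: int = 8) -> int:
    # Appropriate Byte Counting caps how much window growth (and hence how
    # large a burst) a single ACK can release, so burstiness is bounded even
    # without pacing. L is the per-ACK limit in segments (value illustrative).
    return min(bytes_newly_acked, L * smss)
```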
C: Yeah. So the spec says two. Historically, the Windows TCP implementation used the value four, because two was deemed too limiting in terms of performance. Later, given that networks have changed to be faster, and particularly switches and routers have larger buffers, when we experimented with these values the current recommendation was found to be the sweet spot; going any higher, for example 16, was leading to a much...
A: Any more questions? If that's not the case, then thank you, and we move on to the next agenda item: Michael is going to present some work on the TCP YANG model.
B: I'd like to give a brief update on the TCP YANG module.
B: This is joint work with Vishal and in particular Mahesh, who might currently be in the IDR working group while I present. So before we go into the details, let me briefly recap what this draft is about: this is a very small and well-focused YANG module for TCP configuration.
B: It basically covers two different aspects. First, it more or less translates content of the TCP MIB into YANG semantics; that's basically basic statistics and the connection list. Second, it contains configuration parameters for TCP-AO and also MD5, simply because we have use cases for those specific configuration parameters. We were very careful in adding configuration parameters: we only add to this YANG document configuration that's actually needed elsewhere in the IETF, and that use case at the moment is, in particular, the BGP YANG module.
B: ...and also the document in the netconf working group on TCP client-server configuration.
B: The document itself is one of the smallest YANG modules in the IETF, given the very narrow scope, so it fits on one slide. As I mentioned, it basically includes a connection list and statistics, which are the equivalents of the TCP MIB; then it includes, for each connection in the list, TCP-AO configuration, a very simple statement for MD5 in case that's needed, and an import for keepalives. So it's a very easily readable document.
B: It is one of the smallest YANG documents in the IETF right now. We updated the document a couple of weeks ago; the current version is -02, and in that version we've addressed the comments that we got in the last meeting.
B: There were questions that we discussed last time about what happens if a NETCONF client creates a new entry in the TCP connection list, and also what happens if it deletes one; we've added descriptions to the YANG model for those cases. Then we simplified the model by removing imports from the NETCONF model, which didn't make a lot of sense there; we discussed this last meeting. We also had a discussion during the last meeting about the reset RPC call that is in the YANG module.
B: There was a very helpful comment from Richard or, I think, Michael Dickson, that it's not very complex to implement such a reset, and we've mentioned that trick now in the description and kept the RPC as it was before. Last but not least, and this is also something I mentioned before, we improved the description of the TCP-AO modeling based on feedback from Nokia and Juniper.
B: I see Praveen in the queue; so, Praveen, if you have any questions on those slides, please feel free to step up.
B: Okay. The key open issue right now is that, at the moment, there's no implementation of the model, and we are trying to change that. I reported during the last meeting that we are working on a prototype for the YANG module; I work with two students who have made progress on an implementation. So I'd like to give a brief update on what we've done since the last meeting.
B
So
we
use
a
netcon
server
software
clickson,
which
is
an
open
source,
netconf
implementation
that
is
pretty
powerful,
also
relatively
simple.
It
has
one
downside,
namely
that
it
doesn't
support,
nmda
and
so
forth.
So
that's
one
of
the
issues,
but
other
than
that.
It's
a
fairly
straightforward
net.
Conservative
implementation.
That's
relatively
easy
to
use.
B
We
use
that
netcon
server
framework
to
implement
the
use,
the
part
of
the
model
that
can
be
implemented
on
a
standard
operation
system
right
now,
that's
basic,
basically,
the
connection
list
and
the
stats.
What
is
missing
right
now
is
the
tcpa
ao
configuration
simply
because
there's
no
open
source,
tcpo
limitation
right
now
that
we
could
use.
B
We
have
tested
that
both
on
linux,
as
well
as
on
qnx
and
the
letter
as
an
example
for
an
operating
system,
that's
more
typical
for
embedded
networking
hardware
and
we
hope
that
we
can
open
source
the
prototype
relatively
soonish
after
the
meeting.
It's
still
not
fully
finished,
but
once
we
have
it,
we
plan
to
release
it
so
that
everybody
can
have
a
look
at
it
and
play
with
it
set
during
these
implementation
efforts.
B
There
were
no
further
issues
identified
in
the
yang
module
so,
as
I
said,
the
parts
that
can
data
supported
by
operating
systems
that
we
have
access
to
work
in
the
young
module
and
that
they
can
be
implemented
in
the
last
two
slides.
I
give
I'd
like
to
give
a
brief
overview
of
how
the
implementation
looks
like
as
of
today.
B: First, this gives an overview of what a NETCONF server actually looks like. This might be a slightly unusual diagram in TCPM, because we typically look at the protocol stack itself and not at management software that's running on top of the TCP stack, but this is the typical management software that runs, for example, on a router or an embedded networking device.
B
The
system
configuration
can
be
changed
via
plugins
and
what
you're
obviously
doing
in
this
work
right
now
is
to
implement
one
such
plugin
for
the
game
module
that
I've
just
presented.
That
is,
we
basically
add
a
plugin
to
this
netconf
server
that
translates
the
netconf
commands
into
commands
towards
the
network
stack
as,
for
example,
netstat
commands
or
a
reading
from
the
clock
directory
or
whatever
is
needed
on
a
specific
operating
system
to
retrieve
the
parameters
of
the
tcp
stack.
Of
course,
this
plugin
is
operating
system
specific,
but
the
frontend
is
operating
system
independent.
B: So that's what we've done so far. As I said, it covers the parts that are available on the operating systems, Linux and QNX.
F: A question, I guess for the chairs, based on what Michael presented... sorry, as a contributor: we seem to have resolved all the outstanding issues and also have an implementation that has not identified any more issues. I guess my question to the chairs is: what will it take to progress the draft to working group last call?
B: Yeah, at least from my side, there is the plan to open-source the software; that's something that I think will hopefully increase the credibility of it. But once we've done that, I'm not aware of further open issues, so if the other chairs agree, we could head towards working group last call.
B
At
least
during
working
group
adoption,
there
were
rare
statements
that
the
support
of
adoption
is
conditional
to
the
fact
that
somebody
tries
to
implement
it.
So
at
least
we
had
statements
like
that
during
adoption,
and
I
mean
I
try
my
best
to
actually
try
to
come
up
with
an
implementation
with
the
limited
resources
that
I
have.
B: There's one open issue: the availability of TCP-AO as open source. And that might actually change; I've heard rumors that within the next couple of months there may be open-source activities, but I've not seen anything specific.
B: So there are rumors that an open-source implementation of TCP-AO might come up, but I haven't seen it so far; if something in that space shows up, that would obviously allow testing the remaining part of the model.
G: From my point of view: yeah, we adopted this draft as a working group item, and that means, in a nutshell, that we agreed to publish this document. So if we want to stop it, I would like to see a very explicit opinion that we should not do this, with an explicit reason. Other than that, I don't see any objection.
A: So the question then, for me, is: shall we wait, timeline-wise, until the next IETF or something before making a working group last call? Just... I mean...
B: I mean, the alternative would be just to do the working group last call very soonish. There have also been comments in the past that it wastes working group time to discuss this document.
B: The sooner we get it off the table, the less working group time is consumed; so that would be one argument to do a working group last call soonish. I mean, the big question...
F: Two things I just wanted to point out. One is kind of obvious: of course, the last call would have to be made on the mailing list anyway; whether it's discussed in the meeting or not, it has to be verified on the mailing list. The second point is that I think there is going to be a MISSREF even if you do go ahead with publication, so it's not like it'll go and get published immediately, because there is a dependency on the NETCONF...
F: Sure, we can definitely try to coordinate. I can let you know when we are ready to push the button on the NETCONF draft, so that you can push the TCPM one at about the same time.
B: We import the keepalive grouping from it.
B: Okay, let's discuss it afterwards.
H: All right. A very quick recap: essentially, there's a three-bit field using the header flags that were previously used for ECN (or rather, it overloads them), and there's also an optional TCP option to supplement them.
H
That's
for
those
who
haven't
joined
this
group
before
now,
where
we,
where
we
are
with
this
on
implementation
and
testing
ilpo,
has
kept
the
implementation
up
to
date
with
the
linux
kernel.
H: The Linux kernel has gone a bit forward from this, but he's kept it up to date with the latest one that supports BBR, because he wanted to make sure Accurate ECN could work with all the congestion control modules, which it now does. The great thing about that is that you can now test different congestion controls with the same feedback.
H: So you're not changing two things at once, which is a very useful feature of it. It also means it proves it works with the more traditional congestion controls, you know, the Reno, the CUBIC and so on; it essentially provides a superset, by giving all the congestion information you get from ECN and then just giving those particular congestion controls the one indication per round-trip time that they need.
H: And just to mention, there's also Richard's FreeBSD implementation, which is getting a bit out of date now and doesn't include the TCP option; it would be good if someone wanted to build on that, that would be great, in FreeBSD.
H: Now, we have been spending a lot of time (and "we" means myself and Ilpo, mostly Ilpo, definitely mostly Ilpo) testing Accurate ECN with different levels of ACK filtering, and that testing is ongoing. I'd also like to ask if anyone here has the resources to be able to...
H: ...you know, some of the larger companies, to do more wide-area testing. And I think you need both, actually, testbed and wide-area: whatever oddities you find in the wide area that you need to explain, in a testbed you get more understanding of what's going on. On the testbed I give results below, which I'll go into, comparing four different degrees of the feedback support that Accurate ECN provides. But the reason you can get more understanding...
H: ...is that, obviously, you can repeat your traffic scenarios and so on; but also, by building models of the ACK filters, you get a much better understanding of what happens if you just tweak those models. You've also got to bear in mind that new ACK filters might come along, so you want to try to stress it beyond what the current ones do.
H: I should add, on AQMs: we have tested it both with on-off AQMs and with ones that space ECN markings out more, because those give very different challenges for ACK filtering, and we found some interesting possibilities, like the possibility of synchronization between an ACK filter and the round-trip time, where you've got blocks of "on" and then blocks of "off" in round-trip-time chunks, which tends to give you different results. So that work on the ACK filters is ongoing.
H: We've used some quite aggressive ones, up to one in 34 packets, which is actually one ACK every four milliseconds per flow, and ones that are not particularly TCP-smart, you know, that fairly just do decimation; and based on that we've been building heuristics into Accurate ECN to deal with all the possibilities.
H: So, just a quick summary of what all the tests and results are. The tests on the left are with the full options, the supplementary AccECN TCP Option, which gives you effectively ground truth; it's as good as you get without the ACK filtering. Going down that column, we have a minimum-option variant, which is not always putting an Accurate ECN option on, just trying to conserve resources when there's minimal space, using the rules in the draft; and then there's no options at all.
H: All of those, of course, have the three-bit ACE field as well, because that's in the TCP header. And then we also have another case, for interest, to compare if we had DCTCP feedback instead; we've implemented that for testing as a switch in the code. So that's without the Accurate ECN ACE at all, just using the one bit in the header, but still with the negotiation around it.
H: Results: generally, with any amount of options, it's nearly as good as you get without ACK filtering, even with extreme filtering. With no option at all, just the ACE field...
H: ...it's usually good, but sometimes we do get poor results. That was actually before we'd put in any really serious heuristics, other than very simple cases like "assume the worst wrap" or "assume the best wrap" of the three-bit field. And in the DCTCP case it's sometimes good, but it's reasonably unpredictable.
H: This was about the possibility of ACKs having CE markings on them: the rule saying you acknowledge every n CE marks would mean you may end up acknowledging an ACK, or ACKing an ACK. This table states where we came to on the list. Originally the recommendation was every two data packets...
H: Sorry, not every two data packets; every two packets with the CE mark on. And now, if you come to a packet with a CE mark, and you've got two CE marks, and it's not a data packet, you don't emit an ACK; you have to wait until you've had three. Three is the recommended value, and it can be up to six, but no less than three.
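The CE-mark acknowledgment rule just described can be sketched like this. It is a simplified, non-normative reading of the discussion (names invented): raising the per-CE-mark ACK trigger for non-data packets to at least three damps the ACK ping-pong quickly.

```python
CE_ACK_N = 3  # recommended trigger; configurable up to 6, but no less than 3

def should_ack_for_ce(ce_marks_since_last_ack: int, is_data_packet: bool) -> bool:
    # Data packets are acknowledged per the normal (delayed) ACK rules anyway.
    if is_data_packet:
        return True
    # For non-data packets (e.g. a CE-marked pure ACK), only emit an ACK once
    # at least CE_ACK_N CE marks have accumulated, to avoid ACKing an ACK
    # in a ping-pong loop.
    return ce_marks_since_last_ack >= CE_ACK_N
```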
H: So you don't get this ping-pong, or you damp out the ping-pong very quickly. And on the other point that Yoshi raised, about detecting duplicates: we've now put all the text about that in the draft.
H: So if people could review that, that would be useful. It's essentially what was agreed on the list: when you're testing for a duplicate ACK, if SACK has been negotiated and there's no SACK block on it, don't treat it as a duplicate ACK. If you're not using SACK (I think SACK is the easiest test), you can use timestamps and just check that the ACK wasn't sent before your oldest unacknowledged data.
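The duplicate-ACK classification agreed on the list can be sketched as a small decision helper. This is an illustration, not the draft's text; the parameter names are mine, and the timestamp comparison is a simplified rendering of "the ACK wasn't sent before your oldest unacknowledged data".

```python
from typing import Optional

def treat_as_duplicate_ack(sack_negotiated: bool,
                           has_sack_block: bool,
                           tsecr: Optional[int],
                           oldest_unacked_tsval: Optional[int]) -> bool:
    # With SACK negotiated: a would-be duplicate ACK carrying no SACK block
    # is not treated as a duplicate.
    if sack_negotiated:
        return has_sack_block
    # Without SACK: timestamps can show the ACK predates our oldest
    # unacknowledged data, in which case it is not treated as a duplicate.
    if tsecr is not None and oldest_unacked_tsval is not None:
        return tsecr >= oldest_unacked_tsval
    # Neither test available: fall back to classic behavior; per the draft,
    # such an implementation should not set ECT on pure ACKs at all.
    return True
```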
H: And finally, if you don't implement that test, you should not send ECT on pure ACKs in the first place. So that's all in the draft; please review it.
H: It was meant to be: when you process the burst, you then decide what ACKs to send. So the guideline is that you should still send every n, but if it's performance-critical, you know, if it could harm your performance, you don't have to. And I noticed it's very similar to the rules that are in the QUIC draft on the same subject.
H: So I think that's all been on the list, and it's fairly simple. We have also fairly radically altered the rules on when you send the TCP option; not really changed the code much, it's just that it wasn't clear that the main rule was meant to be: send an AccECN TCP option on every packet that acknowledges new data.
H: And particularly (this is a change) only include the fields that have ever changed in the connection, so you don't waste bytes repeating a field that has never changed. We made that the top-level recommendation, and then the other recommendations sort of cut back from that.
H
If you need some space for, say, SACK options or something, what do you do about that. And we've removed some of the more complex requirements on beaconing fields. And there was one that I hadn't realized was still in there, which should have been taken out ages ago, about emitting an ACK whenever any byte counter changes.
H
So that section has got quite a bit of diff text in it and it would be useful to review, please. And then a final slide.
H
There's been a lot of text tweaking as well, nothing gory: more changes around the middleboxes and the distinction between normalizers and transparency. Also not in the draft yet, but I've got it ready to post tomorrow.
H
I did a look through the draft through the eyes of someone who wanted to do a minimal implementation, say for a limited, resource-constrained device or something, because there's talk of jumping to this for Thread, or for IoT devices: just checking that, if you did implement all the MUSTs and nothing else, it all made sense. And in doing that I noticed some of the wording around some of the MUSTs.
H
There were some duplicates and things like that, so we tidied that up a bit. So it's ready for working group last call, except we're waiting for, well, we've had a security directorate review, but I think the idea was that we were going to get another one, so we're still sort of waiting for that, and I don't suppose that holds up a working group last call.
H
That's
cool,
but
the
other
other
thing
that's
going
on
is
obviously
the
act,
filter,
testing
and
just
to
point
out
that
the
generalized
dcn
draft
that's
not
being
presented
today.
It's
it's
sitting
there.
It's
not!
You
know
it's
it's
ready,
but
it
depends
on
this
draft.
So
that's
why
it's
sitting
there
and
no
need
to
present
it,
because
it's
just
ready!
A
C
Yeah, my question.
J
C
A choice here: whether to even have pure ACKs CE-marked. My understanding is that today ACKs don't count towards congestion control, so what is the benefit of marking pure ACKs?
C
To deal with all this complexity, okay, of duplicate ACKs.
H
Right. This draft doesn't require you to have ECN-capable ACKs, but the whole way that the draft has been written is that it has to deal with anything that comes in.
H
It's about what a receiver does. And so if pure ACKs, sorry, ECN-capable pure ACKs, become the thing in the future, it's got to be able to deal with them. So it just says what a receiver does if it gets one of those sorts of ACKs, and so we have to put all the machinery in, in case those ACKs come in. And indeed slightly more than that, because of the generalized ECN draft recommending that you put ECN on ACKs, and it's something that a number of people want to do. So I detected a slight willingness to, or need for, care on the complexity of that.
H
Obviously it protects them from getting lost, but it does mean you have a better idea of congestion on the ACK stream.
H
If you need it. And so in future, just unilaterally if you like, an asymmetric data center could then see what congestion was coming back to it on the ACK stream, and if there was something like Carlos Gomez's ACK control thing, it could then ask the receiver to thin the ACK stream. But that's all; it's just there because you've got these ACKs coming in and they might be ECN-capable.
H
So you've got to be able to deal with that anyway in your code, because it's a possibility in the headers, so you have to do something about it anyway.
C
Yes, that answers the question, Bob. Just one follow-up: what happens if the receiver does not send an ACK back in response to your CE-marked ACKs?
H
No,
nothing
happens,
it's
this
is
only
actually,
it
says,
must
emit
a
hack
once
nc
marks
have
arrived
since
the
previous
act
yeah.
So
it's
a
must,
but
if
it
if
the
implementation
doesn't
comply
with
it
and
obviously
even
if
it
does,
the
act
can
get
lost,
nothing
happens,
you
know
it's,
it's
no
acts
get
lost
and
no
one
knows
they
get
lost.
Now
you
know
it's,
it's
not
a
the
sky's
not
going
to
fall.
K
Since we're on this slide, I might as well just ask my first question on this slide. When we say "must emit an ACK once n CE marks have arrived": in the world of, like, GRO, let's say TCP receives a jumbo packet, right, after GRO of, say, 16 MTU-sized packets, and they all have CE marks.
H
Well, this is what this slide said that you deal with: you take in the super-segment and then count up everything that would be considered to be n CE marks. But there is this get-out clause at the end: if you haven't got the resources to send an ACK every n packets, you don't have to.
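The GRO handling just described, counting the CE marks carried by a coalesced super-segment and applying the get-out clause, might look roughly like this. A minimal sketch under our own assumptions; the function and parameter names are illustrative, not from any real GRO API or the draft.

```python
def acks_for_super_segment(ce_count, n=3, resource_limited=False):
    """How many ACKs the rule above would call for after absorbing a
    coalesced super-segment carrying ce_count CE-marked packets.

    - Normally: one ACK per n CE marks counted inside the super-segment,
      and at least one ACK for the new data itself.
    - Get-out clause: a resource-limited receiver may send a single ACK.
    """
    if resource_limited:
        return 1
    return max(1, ce_count // n)
```

For a 16-packet super-segment with every packet CE-marked and n=3, this yields five ACKs, or just one under the get-out clause.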
H
However,
as
I
understand
it
from
the
answer,
neil
gave
on
the
list
to
this
question
that
I
asked
dc
tcp
does
when
it
goes
through
one
of
those
super
segments.
It
does
send
out
a
whole
load
of
acts
back
to
back
to
keep
to
its
rules
of
how
many
you
know
of
marking,
or
of
acting
every
two
ce
marks,
or
acting
any
transitions
from
or
two
c
and
not
c.
K
Okay, that's not my understanding of what the Linux DCTCP does. It doesn't send a bunch of ACKs, or purposely send extra ACKs, just to create more CE ACKs.
H
Right,
okay,
well,
I
mean
neil
said
it
did,
but
maybe
you
know
you
could
correct
him
or
talk
with
each
other
and
see,
which
is
the
right
answer,
because
I'd
be
interested
to
know.
But
the
the
point
of
this
one
is
of
accurately
saying:
is
you
don't
have
to
and
also
it's
sort
of
less
critical
that
you
do
because
you've
got
you're
sending
a
counter?
H
So, for a start, you haven't got to send every two; n isn't two, it's six. And so you've got more scope for not sending so many ACKs, and obviously if you're sending the option as well.
K
Okay, we can resolve more offline. My second question is on that comparison study with the minimum option; there is also the "DCTCP fb". What does "fb" stand for in that?
H
K
Oh, okay. I thought it was Facebook DCTCP.
K
Like
that,
but
for
the
the
no
option
you
mentioned
some
poor
scenarios,
will
there
be
some
tech
report
published.
So
we
can
see
like
more
details
of
what?
What
are
those
poor
scenarios.
H
Yeah
yeah
yeah.
Certainly
we
want
to
make
sure
that
we
do
it
all
properly
yeah
and,
as
I
mentioned
in
the
when
I
gave
the
slide
where
wide
area
testing
is
also
needed.
You
know
if
you
guys
wanted
to
do
a
bit
of
testing
of
this
as
well.
We've
got
the
code
now,
but
you
can
switch
in
dcp
feedback
or
and
and
compare
things
and
all
the
rest
of
it
and
and
compare
different
congestion
controllers.
H
So it's all in Linux, okay, right, in 5.10.
K
Okay, so you mean these poor scenarios are already documented in the current draft, or there will be... no?
H
No,
no
they're,
not
no,
and
probably
given,
given
the
difficulties
of
putting
graphs
in
ascii
we
would.
We
would
do
that
in
some
separate
technical
report
and
then
refer
to
it.
H
What it says is: you put an option on every ACK that acknowledges new data, and then, if you haven't got space for some reason, you've got to follow these rules, and it gives some rules as to what's best to do. So that "minimum option" is following those rules without putting it on every ACK; it's what the minimum rules are. And then "no opt" is no options at all.
K
So,
in
your
opinion,
if
we
replace
the
dc-tcp
inside
the
data
center
with
the
accurate
ecm
without
squeeze
the
note
option,
route,
yeah
use
e
for
see
like
big
issues
coming
up
like
this.
Well.
H
Oh well, you know, like Wi-Fi, for instance, which coalesces a load of ACKs, or DOCSIS, which, each time it does a request-grant every four milliseconds or two milliseconds, only sends one ACK for each flow, and so it coalesces up all the ACKs.
K
So
air
coalescing
has
happened
naturally,
because
of
gro,
so
it's
extremely
common
to
see
an
act
of
acting
64
kilobytes
very
common,
and
it's
actually
going
to
be
bigger.
If
you
see
eric's
presentation
in
the
last
net,
dev
called
big
tcp,
so
yeah
we
are
going
to
see
basically
high
air
compression-
yes,
but
not
through
the
typical
dark
six
mechanism,
but
just
naturally
happening
with
gr
yeah.
H
Yeah,
so
I
mean
I,
I
would
say
that
if
you
get
once
you
go
to
really
high
correlating
filtering
whatever
suppression
you
want
to
call
it,
then
I
I
would
suggest
you're
putting
the
option
on
on
the
packets,
because
this
three-bit
counter
is
just
going
to
become
a
bit
sort
of.
You
know
it's
going
to
spin
round
multiple
times.
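The three-bit counter mentioned here (the ACE field in accurate ECN) wraps modulo 8, which is exactly why heavy ACK thinning makes it ambiguous. A minimal sketch of the sender-side delta computation, assuming only that the field is a 3-bit CE-packet counter; the function name is ours.

```python
ACE_MOD = 8  # the ACE field is 3 bits, so it counts CE packets modulo 8

def ace_delta(prev_ace, new_ace):
    """Smallest non-negative CE-count increment consistent with the wrap.

    If more than 7 CE-marked packets arrived between two ACKs (e.g. under
    100% marking with aggressive ACK coalescing), the true increment could
    be this value plus any multiple of 8, which is the ambiguity the
    heuristics mentioned above try to resolve.
    """
    return (new_ace - prev_ace) % ACE_MOD
```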
H
If
you
get
100
marking,
you
know
so
it's,
but
you
know,
we've
we've
devel,
we
have
developed
and
we
are
developing
heuristics
to
try
and
work
out
what
that
counter
might
mean,
even
though
it
wraps,
but
I
you
know,
I
don't
know
I've
heard
numbers
of
44
that
might
cope
with
that,
but
getting
higher
than
that.
You
know,
I
think
you
need.
H
C
I
So I just wanted to go back to Praveen's question around ECN-capable ACKs again. One area where this could be relevant, and I believe we have discussed this, is when you have a data stream, or a session, where the data flow is changing direction on a regular basis.
I
However,
infrequently
enough
that
you
may
get
close
to
the
idle
period,
so
using
that
mechanism
with
the
with
the
axe
still
delivering
fresh
ce
information,
you
can
keep
them
say
in
the
in
the
data
center
tcp
the
alpha
value
up
to
date,
and
the
other
comment
I
wanted
to
make
on
this
is
that
that
was
a
discussion
that
we've
had
around
the
iot
space.
I
A
sender
is
not
required,
as
of
yet
to
have
the
at
the
pure
x
marked
as
ect,
capable
ecn
capable
so
meaning
the
the
vote
problem
unquote
of
c
of
x
to
of
x
to
c
e
marked
x,
which
in
turn
may
be
c
e
marked,
may
not
be
all
that
of
a
big
issue
if
one
or
the
other
side
of
of
a
data
transmission
do
not
mark
the
pure
acts
as
ecn
capable
in
the
first
place,
but
that
was
just
a
clarification.
I
I hope that this further clarifies where this comes in, but it really is what Bob explained: the draft should be making sure that the receiver's behavior, the receiver's reflection of received CEs, is quite well defined, so that the sender can rely on those feedbacks. Hope that makes sense.
B
Specifically, and now, if I read the slides correctly, we have a SHOULD in there, that the option should be used relatively frequently, and as far as I understand that's a change compared to what we did two to three years ago, for example. And my question, more to the group, is whether the opinion in the group has actually changed, and whether using the option in some cases, for example because of ACK filtering, doesn't do harm.
H
Okay,
before
people
answer
that,
can
I
just
clarify
that
the
draft
says
if
you're
going
to
use
the
option,
you
should
use
it
on
every
packet
by
default,
but
if
not
you
could.
These
are
the
rules
for
how
you
don't
so
it's
it's
only
if
you
are
using
it.
B
H
It gets stronger than before. I would say the recommendation is stronger because of ACK filtering, because this draft has been around long enough that when it first started there wasn't really much; there was a bit, but it was nothing like as aggressive as it is nowadays.
I
So
to
answer
that
question,
I
believe
I
still
believe
that
for
constrained
environments,
so
basically
those
that
youtube
had
in
mind
not
using
the
option
is
quite
viable
and
I
believe
the
data
that
we've
collected
so
far
also
shows
that,
because
one
aspect
that
may
be
lost
over
here,
a
little
bit
is
that
most
of
these
extreme
scenarios
assume
long
continuous
runs
of
ce
marked
packets,
and
that
is
not
necessarily
the
case,
depending
on
what
kind
of
eqm
is
deployed
right.
I
So,
for
example,
the
the
steady
state
mechanism
for
data
center
tcp
or
data
center
tcp's
feedback
loop
with
the
c
marking
on
a
switch
is
typically
only
two
marks
per
rtt,
meaning
you
don't
tend
to
get
overly
excessive
runs
of
of
packets,
all
of
them
ce
marked.
So
the
the
recommendation
obviously
has
to
cater
for
this
in
the
draft,
but
I
believe
that
many
environments
exist
where
you
can
get
sufficiently
accurate
feedback
without
the
option.
Nevertheless,.
K
I
think
the
the
option
itself
is
not
the
problem,
it's
really
the
comparability
with
gro
and
tso
or
generally
segment
of
flow,
and
I
think
the
trend
is
just
that
as
the
next
speed
get
pushed
higher
and
higher
right,
it's
growing
so
quickly.
We
are
trying
to
batch
even
bigger
aggregations.
K
That's
what
the
big
tcp
is
about.
The
64
kilobyte
aggregation
is
already
running
tight
on
a
saving
cpu
cycles.
So
really
how
ace
is
or
accurate
ecn
is
compatible
to
the
gro
and
tso
is
really
the
key
question.
Yeah.
H
Yeah
and-
and
I
think
we
need
to
have
a
bit
of
time
together
to
talk
about
that
specifically,
but
also.
H
Having
having
the
ability
to
compare
it
with
dc,
tcp
is
useful,
and-
and
I
guess
the
the
biggest
point
here
really
is
the
ecn
itself
is-
is
the
problem
against
gso
and
gro
and
and
all
the
rest
of
it,
because
it's
inherently
all
about
changing
from
ones
to
naughts,
which,
which
is
something
that
you
know
offload,
finds
difficult,
having
everything
changing
all
the
time.
Rather
than
being
the
same,
you
know
or
changing
in
in
a
way
that
has
has
significance
that
you
have
to
pick.
C
Got
a
comment
on
this
option
support
so
I
think
within
the
data
center,
this
is
reasonable,
but
on.
C
We
typically
see
middle
box
issues.
The
last
time
we
tried
to
load
out
a
new
option
with
all
kinds
of
interesting
metal
box
behavior
with
tfo.
So
I
guess
the
question
is,
you
know:
is
there
any
experiments
that
have
been
done
at
internet
scale
to
see
what
kind
of
bad
metal
box
behaviors
would
occur
as
a
result
of
rolling
such
an
option
out,
and
whether
the
draft
should
address
any
sort
of
like
strategy
for
an
end
point
to
deal
with
bad
metal
box
behavior?
I
guess
that
would
be.
H
I will tell you, this draft would be a few pages if it wasn't for all the stuff in there about having to deal with middleboxes, and all the tests for doing it.
B
Yeah
and
by
the
way,
that's
correct
as
a
the
draft
discusses
middle
boxes
quite
a
bit
so
that
that's
well
covered
so.
H
C
I was going to say that it's not just about a single connection. We've seen pathological behavior where one connection's attempt to use an option resulted in problems for subsequent connections. So that's the main concern. So within a...
H
Yeah,
I
think
we
we
definitely
need
to
move
into
that
sort
of
phase
of
of
testing
this
now
you
know
at
larger
scale
and
any
you
know
if
you
want
to
help
with
that
praveen.
That
would
be.
H
A
If not, then thanks for the presentation and the discussion. Next is an update on CUBIC by Vidhi.
D
For the AIMD window that CUBIC uses for low-BDP environments, the time-based approach doesn't seem to work well with, you know, certain rate-limited senders, as the congestion window will continue to grow even when the app is not sending a lot of data. So we have changed this in the revised draft to just do whatever NewReno does, you know, based on the ACK clocking and the bytes acknowledged or the segments acknowledged. Because we made this change, we thought there would.
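The change just described, growing the Reno-friendly window per bytes or segments actually acknowledged rather than as a function of elapsed time, can be sketched per ACK as below. This is a hedged illustration of the idea under our own naming, not the draft's normative pseudocode.

```python
def update_w_est(w_est, segments_acked, alpha=1.0):
    """ACK-clocked, NewReno-style additive increase of the AIMD-friendly
    window estimate: roughly alpha segments per window's worth of data
    acknowledged. Because growth is driven by segments_acked, an
    application-limited sender that stops sending stops growing w_est,
    unlike the old time-based formula."""
    return w_est + alpha * segments_acked / w_est
```

For example, acknowledging a full window of 10 segments grows a 10-segment estimate by one segment, the classic Reno step.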
D
There
were
a
lot
of
good
feedback.
There
was
a
lot
of
good
feedback
and
comments,
and
thanks
to
yoshi
and
michael
and
others
who
reviewed
the
draft
thoroughly
and
provided
great
comments,
and
we
have
updated
these
changes
in
the
latest
release
draft,
which
was,
I
think
published,
which
was,
I
think,
out
there
on
on
last
monday.
D
The
original
rc
was
written
based
on
a
paper
published
in
2008
and
does
the
general
theme
was
that
cubic
is
an
extension
of
traditional
tcp
standards.
But
this
has
changed
over
the
years
right
and
it
it
kind
of
is
obsolete.
To
say
cubic
is
an
extension,
so
we
have
changed.
The
text
to
make
it
sound
like
cubic
is,
has
become
the
most
deployed,
conduction
control
over
the
internet
so
and
we
have
updated
the
draft
so
that
it
sounds
like.
D
There
was
another
confusion
that
yoshi
pointed
out.
Most
of
these
issues
were
pointed
out
by
yoshi.
So
thank
you
yoshi
again
for
doing
the
thorough
review.
D
So
we
have
added
guidance
for
implementations
to
be
able
to
use
files
acknowledged
if
that's
preferable
and
additionally,
as
segments
act,
need
to
be
mss
size.
We
have
clarified
that
this
number
is
a
decimal
value
and
it
can
also
be
you
know
less
than
one
when
you
receive
at
less
than
when
you
receive
x
for
packets,
less
than
mss
bytes.
D
And
excuse
me,
in
addition
to
packet
loss
detected
by
tupac's
and
ece
signals,
we
have
added
loss
events
detected
by
right,
tlp
and
quick
loss
detection
mechanism
which
are
common
in
these.
Today's
implementations
of
tcp
and
quick
last,
but
not
the
least
yoshi
found
a
very
interesting
interpretation
of
convicts
region.
D
So
this
in
this
diagram,
if
you
see
right
after
we
reach
w
max
the
aim
d
friendly
window
is
being
used
because
it's
higher
than
the
increase
by
the
cubic
curve
or
the
convex
region,
but
only
after
a
point
the
convict
profile
takes
over.
So
this
means
that
when
the
convict
profile
takes
over,
it's
not
really
that
slow.
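The handover described in the diagram, where the AIMD-friendly estimate governs just after W_max until the convex part of the cubic curve overtakes it, amounts to taking the larger of the two windows. A minimal sketch under that reading; names are ours, not the draft's.

```python
def cwnd_target(w_cubic, w_est):
    """Window CUBIC actually uses at a given instant, as interpreted above:
    the Reno/AIMD-friendly estimate w_est governs while it exceeds the
    cubic curve's value w_cubic (just after reaching W_max), and the
    cubic (convex) profile takes over once it grows past w_est."""
    return max(w_cubic, w_est)
```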
D
There are more editorial changes, so feel free to take a look and let us know if everything looks good or if we messed up anything. We have addressed all the working group last call comments so far, and the latest draft, which is version 3, was published on Monday.
D
A
Thanks for the presentation. One reason for this presentation was to draw the attention of the working group to this document before concluding the working group last call, to make sure that there are no late comments.
C
Yeah,
I
have
a
question
actually
so
compared
to
the
original
rsv
and
the
best
rfc
as
an
implementation
that
makes
these
changes.
D
Very good question. So the answer is yes and no. No, because Linux already has all of these things, and what we learned from these implementations we have described in the draft. So for Linux, probably no; for other implementations, especially for QUIC...
D
There
would
be
performance
benefits
if
you
missed
out
or
if
you
followed
the
old
cubic
rfc
word
to
work,
so
you
will
see
performance
benefits
and
I
think
there
were
some
updates,
perhaps
from
quiche
developers,
or
I
might
be
remembering
it
wrong.
But
you
will
see
the
performance
benefits
if
you
follow
the
current
rs.
The
current
draft
as
compared
to
the
cubic
rc
for
new.
C
D
K
I just want to reflect on the part about ABC. For the Linux implementation: Linux does not do ABC in a way that is strictly ABC. In Linux, whether CUBIC or NewReno or any congestion control...
K
When
we
receive
an
app
that
acts,
you
know
100
mpu
size
packet
say
it's
in
slow
start
literally,
the
c1
will
grow
by
100
mq
size
worth
of
seaweed
packet,
so
it
does
not
increase
the
packet
up
to
only
two
packets,
and
I
think
it's
similar
to
the
question
I
asked
parveen
early
on.
Did
you
use
two
or
you
use
a
because
the
fact
is
that
ad
filtering
is
everywhere
and
ag
aggregation
is
continue
to
go
up
and
up
right.
K
The
stretch
act,
but
because
that's
needed
for
high
speed
networking.
So
I
want
to
ask
if
we
still
want
to
stick
with
like
okay
cubic
should
use
the
abc
standard,
which
is
not
really
what
a
very
common
implementation
use,
or
we
should
trying
to
change
that
part
to
reflect
what
the
code.
D
Yeah,
that's
a
good
point,
I
think
so
right
now
we
currently.
Maybe
you
can
look
at
a
draft,
but
right
now
we
say
that
the
equation
that
we
use
is
when
abc
is
disabled
and
if
you
enable
abc,
then
this
should
be
adjusted
for
acknowledged
bytes,
which
means
really,
like
I
mean
what
is
abc
abc,
doesn't
mean
that
you
always
have
to
use.
2
l
equals
to
2..
D
You use your bytes acknowledged, but you have to put a maximum limit on it, and that's not a MUST; if I remember ABC correctly, it's kind of a SHOULD, for the times, you know, when people were acknowledging two segments at a time, and definitely that has changed to multiple segments per ACK. So still using the number of bytes acknowledged, which is the main theme of ABC, still makes sense, right, even if you don't follow the L equals 2.
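The distinction being debated, counting acknowledged bytes versus capping the per-ACK increase at L segments, can be sketched as below. A hedged illustration only: the function name is ours, and the uncapped branch reflects the Linux behavior described in this discussion rather than any normative text.

```python
def abc_slow_start_increase(bytes_acked, smss, L=None):
    """Per-ACK cwnd increase (in bytes) during slow start under
    Appropriate Byte Counting.

    - L=None: uncapped byte counting, growing by all newly acked bytes,
      as the Linux behavior described above (important with stretch
      ACKs from GRO/ACK filtering, but bursty without pacing).
    - L=2 reflects the RFC 3465 suggested cap of L * SMSS per ACK,
      which throttles growth when one ACK covers many segments.
    """
    if L is None:
        return bytes_acked
    return min(bytes_acked, L * smss)
```

With a stretch ACK covering 100 segments of 1500 bytes, the uncapped form grows cwnd by 150,000 bytes, while the L=2 cap allows only 3,000.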
K
Yeah,
I
think
it's
the
cap
that
I'm
talking
about,
so
I
don't
know
if
I
know
what
our
wording-
the
cab
may
not
necessarily
be
sort
of
pragmatic
anymore,
but
yeah
just
want
to
throw
this
out.
D
Yeah, the idea of ABC is the right idea, that you count the bytes. It's just that the cap wasn't recommended based on what is in practice right now.
A
I
Around ABC: if I remember it correctly, at least in the implementations that I know, ABC during slow start is mostly used to prevent ACK-splitting attacks, and I believe you go up to twice the number of acknowledged bytes per ACK as a maximum. So that doesn't necessarily mean two MTU-sized packets; it can even be higher than that. But obviously that's implementation specific.
A
D
A
Update this, but I guess Martin comes in for that, so do you want to take that, Martin? Okay.
E
Yeah,
so
3465
is
experimental
and
5681
has
abc
limit
of
one,
and
that
is
actually
the
standard
that
allows
knight
counting
because
of
high
start
for
a
while.
We
were
talking
about
making
3465
the
standard.
I
don't
that
seems
to
have
all
that
seems
to
have
dropped,
but
with
5681
having
the
concept.
That's
fine!
E
If
the
word
group
thinks
that
guidance
is
just
obsolete
at
the
risk
of
scope
creep,
we
could
roll
that
into
this.
K
I
think
maybe
it's
time
to
look
at
if
we
should
update
abc
about
that
cap.
It's
because
we
indeed
find
performance
issue
with
cubic
with
that.
You
know
abc
cap.
That's
why
we
actually
patch
up
the
linux
cubic
to
deal
with
all
the
stretch,
act,
issues.
D
K
D
Yeah, and I would like to add another thing, just because we're talking about it: I think the whole pacing thing is missing from that. Stretch ACKs can be used to increase the congestion window, but if that's not used with pacing it could cause a lot of bursts.
K
Piecing,
yes,
I
agree.
That's
a
great
point
to
have
yeah
it's
pros
and
cons.
We
should
both
document
both
as
well
yeah.
D
C
Yeah, I would agree that we should possibly update ABC, rather than have this draft talk about changing ABC. I think updating ABC makes more sense, because, for example, even the HyStart draft that I presented earlier today refers to it.
F
E
If somebody will do this, that's great. I will say that, having communicated with Mark Allman a few times, and he's the author of 3465, if somebody else would write a draft-00, that would probably move the ball along a lot faster than Mark will. So I think we should involve him, but I don't think we should wait for him to make the first move.
D
Okay,
yeah
yeah,
perhaps
someone
some
one
of
us
could
reach
out.
I
think
I
did
reach
out
to
him
at
one
point
of
time.
I
was
talking
thinking
about
abc
and
I
don't
remember
where
that
threat
went,
but
looking
at
all
these
developments
and
condition
control
algorithms
like
I
started
cubic
and
getting
them
proposed
as
a
proposed
standard,
then
we
should
update
all
the
references,
especially
that
relates
to
fight
counting
yep.
We
can
start
a
thread
I
can.
A
G
Yeah, I'd just clarify one thing: even though we are thinking about updating ABC, this should not block this draft, right? This draft has already, you know, almost concluded working group last call, so it's almost there, so we don't have to wait until ABC is updated, I guess.
E
Let
me
say
ship
it,
regardless
of
the
status
of
465..
Currently
it's
in
the
form,
so
3465
is
an
informative
reference
that
we
keep
it
trapped,
which
I
have
not
spent
a
lot
of
time.
Thinking
about,
but
seems
a
little
odd
to
me.
If
it
were
to
become
normative,
then
we
would
have.
We
would
be
blocking
rc
ed
forever,
which
is
not
ideal,
but
it's
the
question
what
you
think
you
can
incredibly
claim
in
terms
of
whether
or
not
it's
normal
for
performative.
D
E
5681 has byte counting, but with a one-MSS limit. So in terms of normative versus informative, that is a safe place to go to just get the concept of byte counting, and I don't see any advantage in referencing 3465, which gives the two-packet limit, which is not actually any better. But it's not sufficient.
D
Are
you
suggesting,
so
let
me
just
double
check:
are
you
suggesting
to
remove
abc
reference
and
use
56.81.
E
So
I
I
have
to
do
some
more
thinking
about
whether
or
not
it
needs
to
be
an
informative,
normative
reference,
and
if
it
is
in,
if
it
is
a
normative
reference
and
you
map
it
to
an
experimental,
then
you
are
stuck
in
the
rc
editor
until
3465
gets
updated
or
there's
some
extent,
there's
more
bureaucracy.
E
Let's
put
it
one
way.
5681
is,
of
course
standard-
and
I
guess
my
point
is
that
if
you're,
just
looking
for
a
reference
for
the
concept
of
bike,
county
exists
in
the
standard
with
anyone.
The
only
reason
to
reference
3465
formatively
is
to
get
the
two
packet
limit,
which
is
that
not
all
that
useful
to
me.
D
Yeah, I mean, the process stuff also has to be taken care of, because it's part of the process. So, Yoshi, are you thinking it's okay to remove the ABC mention that we added to the draft based on your recent comment?
D
G
Yeah, I think it's okay to remove it if you want, but I think, you know, CUBIC can work without ABC, and so that means it doesn't have to be a normative reference; you can put in an informative reference. That's one way.
D
B
L
Yeah, Yoshi said what I was going to say: I think 3465 is informatively referenced by pretty much many TCP standards because of that, and I don't see a reason to normatively reference it here.
L
D
Yeah, so I just double-checked, so thanks, Lars: it currently is informative, so I don't think we need any change. The draft that was published has it currently as an informative reference.