From YouTube: IETF112-TCPM-20211111-1200
Description
TCPM meeting session at IETF112
2021/11/11 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A
Okay then, let's start. This is TCPM, TCP Maintenance. My name is Michael Dixon; I'm one of three chairs, the other ones are Yoshi and Michael. And this is the Note Well; you should have seen it if you attended another meeting, so please read it and do what it says. We already have a note taker, thanks to Gary who volunteered again, and Michael is the Jabber scribe. So if you want anything reflected at the mic, I'll put in a prefix "mic", and I think Yoshi will do the job in case Michael is presenting.
A
One is on Proportional Rate Reduction, which is from Yuchung and is proxied by me today. Then CUBIC, an update; an update on YANG; two presentations on TCP-AO, one about the draft and one about an interop test; and finally the working group documents on TCP EDO and EOS, where we have a slide from Joe, and we are also proxying it. After that, we have three more presentations: two about upcoming bis documents and one about an implementation.
A
If that's not the case, then this is the status of the documents we have. RFC 793bis is after IESG last call; a revised ID is needed, and Wes is, I think, there and can give us an update.
C
Yes, the IESG did a really good job reviewing it and there are a lot of comments. I've worked through some of them, but I've looked at all of them. I've actually made edits for a good number of them, but there are a few that I had put off because they involved more research or homework.
C
A good example is, there was a question about how we treat source route options, and whether what this document says is totally consistent with what other RFCs, not about TCP but about IP options, say. So I wanted to make sure that we did the right thing there and also do something consistent with what real stacks do.
C
So I've been sort of dragging my feet and taking more time to get around to doing the right thing and looking up a lot more information on that one, and there are a couple of other issues like that. But for the most part, the IESG comments are pretty clear, and I think it just needs my attention to get through them.
C
I was going to do that on some of these questions; I think that will be much quicker and lead to better results, so I will do that.
A
If that's not the case, we can go through the list of documents. They are in the order of the milestones, which in TCPM are often not taken that precisely. So we have PRR; I have a status update on that.
A
In the first presentation we have 8312bis, where we also have a presentation.
A
He said that he wants to do so; he has implemented what is in the current document, and Microsoft wants to do some more measurements and present them at the next IETF. Based on the outcome of the measurements, the document should be in pretty good shape, or not; but that depends on the outcome of the measurements. So that's why we don't see any new information and we don't see a presentation here. Accurate ECN and Generalized ECN are both in the same state as last time, because.
B
This may be a good opportunity for a personal announcement: this meeting is most likely my last meeting at the chair desk. I've decided to offer to step down to open room for the next generation of TCP researchers, so that they can get management experience. The transition will not immediately take place, since I still want to wrap up some work; the most important one is the top one on this list here.
A
Yeah, thank you, and thank you for all the work you did for TCPM. I hope we will still have you in the working group as a contributor and reviewer and a person that can share his knowledge about TCP.
A
So this is regarding Proportional Rate Reduction. The draft is, I think, not active anymore.
A
We contacted the authors, and basically they are pretty busy with other things, but they stated that they will provide a new version on the time scale of the next IETF. This is the list of changes they currently have in mind, which need to be addressed before the next version: two bug fixes and some clarifications, especially on interactions between PRR and RACK, because at the point in time when PRR was written, RACK didn't exist. So there are some adaptations also based on newer developments in the documents. Are there any comments? Are there any additional things the authors should address? They said they will read the notes, and any feedback will be addressed.
E
Hello, I'm just curious about the implementation status of this one. Is it already mainline, running in the kernel, or why not?
F
I'm not sure if any of the draft authors are on the call, but I'm in close communication with them, and I believe this is going to basically reflect the latest Linux TCP PRR. There have been a number of minor changes and fixes, I guess changes really, not fixes, over the years, and we wanted to update the document to reflect the experience in the Linux TCP stack. So it will cover the logic that's implemented in Linux TCP.
H
So I just wanted to add on this that I've provided some of the feedback based on my experience of PRR in the FreeBSD stack. This has not yet been fully committed, especially the part that is running without selective acknowledgement, but it happens, or will be addressed, in an upcoming revision, hopefully.
A
But you provided the comments to the authors already?
A
Okay, if that's not the case, then I would say the next focus is CUBIC, and Vidhi is up. You can run the slides on your own; they have been uploaded.
I
Thank you. So I actually forgot how to do the slides thing.
A
Below your name, next to the hand icon, there is a document icon, and if you click on that... yep, you should.
I
It's okay; if you want to take over, I will let you know. Yeah, thank you very much. I'm sorry I couldn't. No problem.
I
Okay! Thank you, hi everyone!
I
So, in the past year we have been making some updates to CUBIC, and we are trying to adapt it based on the recent deployment experience in TCP and QUIC stacks. A lot of folks have contributed to this effort, and we're really grateful for that. Next slide, please.
I
So I think around two months ago, when the chairs were concluding the working group last call.
I
The first one is that CUBIC is more aggressive than Reno, so in the changes we have added that CUBIC basically updates RFC 5681, and this is because of the beta factor, which is 0.7 instead of 0.5. The next point is about slow-start overshoot: CUBIC only reduces the congestion window to 70 percent, instead of 50 percent in the case of Reno, and this means that CUBIC's window is 40 percent higher than Reno's. So this could cause high packet loss during recovery, and recovery could take multiple rounds.
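The multiplicative-decrease comparison described above can be sketched numerically (a minimal illustration, not part of the meeting materials; the overshoot value is hypothetical, the beta factors are the ones named in the discussion):

```python
def window_after_decrease(cwnd: float, beta: float) -> float:
    """One multiplicative decrease: keep a fraction beta of the window."""
    return cwnd * beta

# Hypothetical congestion window at the point of slow-start overshoot, in segments.
overshoot = 100.0

reno = window_after_decrease(overshoot, 0.5)   # Reno halves the window
cubic = window_after_decrease(overshoot, 0.7)  # CUBIC keeps 70 percent

# CUBIC's post-reduction window relative to Reno's:
print(cubic / reno)
```

This is where the "40 percent higher" figure in the talk comes from: 0.7 / 0.5 = 1.4.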
I
And so we have recommended that implementers implement HyStart++ with their CUBIC implementations. The experience that we have from deployment is that most of the implementations are ready to do that and, as I was saying, it could take a bit more than one recovery cycle to compensate for the two-times overshoot in slow start. We are advising implementers to evaluate their choice of the first multiplicative decrease after they receive either a packet loss or an ECE-marked ACK. So this is something they should be evaluating, and we have mentioned that specifically in the draft.
I
The next bullet is about using PRR to reduce the sending rate slowly during loss recovery. As Marco pointed out when he reviewed the draft, it incorrectly mentioned employing fast recovery for all congestion events; this should only be done for packet loss and not for ECN-related congestion events, and we have now updated that. The last point on the slide is, similar to Reno, about the performance of CUBIC as a loss-based congestion control.
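The PRR idea mentioned here, reducing the sending rate gradually during loss recovery, can be sketched as follows. This is a simplified reading of the proportional part of RFC 6937, not the draft authors' code; the variable names follow the RFC, but the example values are made up:

```python
import math

def prr_allowance(prr_delivered: int, prr_out: int,
                  ssthresh: int, recover_fs: int) -> int:
    """Segments PRR allows to have been sent so far during recovery.

    prr_delivered: data delivered to the receiver since recovery began
    prr_out:       data sent since recovery began
    recover_fs:    FlightSize at the start of recovery
    """
    # Send at a ssthresh/recover_fs fraction of the delivery rate, so the
    # in-flight data converges smoothly toward ssthresh instead of stalling.
    return max(0, math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out)

# Halfway through recovery, with ssthresh at 70 percent of the old flight:
print(prr_allowance(prr_delivered=50, prr_out=30, ssthresh=70, recover_fs=100))
```

By the time everything outstanding has been delivered, the amount sent equals ssthresh, which is the "proportional" property this bullet relies on.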
I
So these are also some important design changes. CUBIC has a lower window reduction and it also reaches W_max faster than Reno, so it effectively means that it has a shallower sawtooth than Reno. And we have added some guidance regarding buffer bloat and queue management, where new AQM implementations could probably mitigate some buffer bloat by, for example, setting lower thresholds for the queue size.
I
I mean, we do need a window of two packets in the case of packet loss to perform loss recovery, but for ECE markings we need to keep reducing even beyond a congestion window of one, by using a retransmit-timer-style interval that backs off exponentially. So I think this is something that implementations need to be doing; I don't think Linux does this, or that other implementations take care of this behavior. This is clearly mentioned in RFC 3168, and I think we should implement it.
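The "keep reducing beyond a window of one" behavior described here can be sketched as an RTO-style exponential backoff of the send timer. This is an illustrative reading of the RFC 3168 requirement, not code from any stack; the base interval and cap are hypothetical:

```python
def backoff_interval(base_interval: float, ece_rounds_at_min_cwnd: int,
                     cap: float = 60.0) -> float:
    """Interval between sends once ECE marks keep arriving at a
    one-segment congestion window: double it each marked round,
    like the retransmission timer, up to a cap (seconds)."""
    return min(base_interval * (2 ** ece_rounds_at_min_cwnd), cap)

print(backoff_interval(1.0, 0))  # first reduction at cwnd == 1
print(backoff_interval(1.0, 3))  # after three further ECE-marked rounds
```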
I
They all use flight size and not cwnd, but there could be an issue that the flight size is sometimes very low, especially for rate-limited apps, when the loss occurs. So the best way to mitigate this issue is to implement RFC 7661.
I
These are some of the editorial changes. There was a bug in the average CUBIC window equation that Neal had found, thanks Neal, so we have fixed that.
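For context, the window curve that equation describes can be sketched from the published CUBIC function. This is the RFC 8312 form, not the corrected draft text itself, using the usual constants C = 0.4 and beta = 0.7:

```python
def w_cubic(t: float, w_max: float, beta: float = 0.7, c: float = 0.4) -> float:
    """CUBIC window as a function of time since the last decrease:
    W(t) = C*(t - K)^3 + W_max, where K is the time needed to grow
    back to W_max after the window was cut to beta * W_max."""
    k = ((w_max * (1.0 - beta)) / c) ** (1.0 / 3.0)
    return c * (t - k) ** 3 + w_max

w_max = 100.0
print(round(w_cubic(0.0, w_max), 6))  # right after the decrease: beta * W_max
```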
The next point is rearranging the algorithms for spurious event detection: we now recommend the standards-track F-RTO recovery RFC first, and then we provide some examples or references for experimental RFCs.
I
Then we have added CUBIC's response to certain events, like a sudden increase or decrease in capacity; CUBIC is more aggressive than Reno in both cases. The last three are some simple editorial changes. Next slide, please.
I
Thank you. So this is an open issue on which we would like some feedback from the working group. We have some feedback already on the GitHub issue, but if you have not had a chance to look, I can describe it. Marco has raised some objections about spurious detection and the congestion response to those events, and to provide some context.
I
There are two types of spurious events. At least for TCP there are spurious RTOs, which are applicable only to TCP, and there are spurious retransmits detected by acknowledgments, which are applicable to both TCP and QUIC. So the first issue is that CUBIC is modifying a standards-track RFC, 4015, which defines the response to spurious RTOs.
I
RFC 4015 sets the congestion window to flight size plus the minimum of bytes ACKed and the initial window, but in CUBIC we are recommending restoring the congestion window to its previous state before the reduction was applied. So this is the issue; I will come back to the solution in the next slide.
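The two competing responses to a spurious RTO described above can be put side by side. This is a sketch; the RFC 4015 formula is paraphrased from the talk, and the byte counts are made up:

```python
def eifel_response_cwnd(flight_size: int, bytes_acked: int, iw: int) -> int:
    """Conservative RFC 4015-style response to a detected spurious RTO:
    cwnd = FlightSize + min(bytes ACKed, initial window)."""
    return flight_size + min(bytes_acked, iw)

def cubic_undo_cwnd(cwnd_before_reduction: int) -> int:
    """The CUBIC draft's proposal: restore the pre-reduction window."""
    return cwnd_before_reduction

# Hypothetical numbers, in bytes: the window before the timeout was 100000,
# and flight size has drained to 10000 by the time spuriousness is detected.
print(eifel_response_cwnd(flight_size=10000, bytes_acked=100000, iw=14600))
print(cubic_undo_cwnd(100000))
```

The gap between the two results is the substance of the objection: the conservative formula lands far below the pre-timeout window.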
I
The second issue is that CUBIC is currently specifying a response to both types of spurious events, and Marco said that in certain networks, for example mobile networks, a path change could increase reordering and thus result in spurious fast retransmits, while at the same time the bottleneck capacity might get reduced.
I
So if that happens, then restoring the previous congestion window might not be such a good idea, and that's why he said there is no previous RFC that talks about restoring state after spurious fast retransmit events.
I
The third one is that currently we don't exclude the cases where the ACK detecting the spurious event has ECE set on it. So this is a simple one; I think we can fix it. And the last one is that CUBIC doesn't specify any mechanism to adjust RTOs, acknowledgement thresholds, time thresholds, and so on.
I
So, as I was saying, the first point is about violating a standards-track RFC.
I
The response defined in RFC 4015 is a bit conservative, because the congestion window is set not to the previous value but to flight size plus the minimum of bytes ACKed and the initial window; most likely it will end up being flight size plus the initial window, in my opinion. So this is too low, and I think some folks won't be happy with it.
I
So if you have an opinion about this and how we should proceed or deal with this problem, we would really like to hear from you. The second one is spurious fast retransmits: these are much more common for TCP, and they are the only such events for QUIC. So I think, if there is no previous RFC that provides any guidance.
I
Perhaps we can record CUBIC's response to these events. Again, any input on this topic would be appreciated. The third one we can fix in the draft; that's an easy fix. And for the last point, I think a lot of us agreed that we would need a separate draft to cover it, as we want to keep the loss recovery changes separate from the congestion control. Next slide, please.
I
So we will continue to fix any open issues; those are the next steps. Once the issues that are open right now are resolved, we will again request the chairs to conclude the working group last call. So if you haven't read the draft, please read it; if you plan to read it, please do so soon, as we would like to take care of the issues as soon as possible. Thank you.
K
Hi Vidhi, good job presenting. I just had a very quick comment about something you said verbally; I don't think it was actually on the slides. You talked about the problems with wireless networks losing packets.
K
The bigger problem is the huge variability, because Wi-Fi can vary from a gigabit or more down to a megabit per second, and that happens if you just walk around the house with your phone, or if someone else walks around and gets between your computer and the access point. So I think the bigger problem (and I don't know whether the draft talks about this, but since you mentioned it, I just wanted to bring it up).
K
The bigger problem is that if your Wi-Fi rate instantly drops from 500 megabits to 50 megabits, when CUBIC is only reducing 30 percent each time, it takes multiple round trips to bring cwnd down by a factor of 10, and of course, during that time it's sending too fast, the queue is building up, so the round trip time is going up.
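The multi-round convergence described here can be estimated directly: with a multiplicative decrease that keeps a fraction beta of the window per loss round, cutting the window by a factor F takes about log(F)/log(1/beta) rounds. A rough back-of-the-envelope sketch, ignoring PRR pacing and any window growth between decreases:

```python
import math

def rounds_to_reduce(factor: float, beta: float = 0.7) -> int:
    """Decrease rounds needed to shrink cwnd by `factor`, when each
    round keeps a fraction `beta` of the window."""
    return math.ceil(math.log(factor) / -math.log(beta))

# 500 Mbit/s down to 50 Mbit/s is a factor-of-10 reduction:
print(rounds_to_reduce(10.0))       # CUBIC, beta = 0.7
print(rounds_to_reduce(10.0, 0.5))  # Reno, beta = 0.5
```

So CUBIC needs roughly seven loss rounds, versus four for Reno, before cwnd matches the new rate, which is why the queue and the round trip time keep growing in the meantime.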
L
So I have a question. As we make these changes, some of them are coming in from review feedback; I'm wondering if the combination of all the diffs from the previous RFC would be representative of an implementation that has implemented all of these changes.
L
Experience of the entire new RFC. So that's just one concern I had; any comments on that?
I
It's a great point. The changes that had been made before this review were mainly to reflect what the implementations are doing today; with this new review it's being a little more careful, a little bit conservative in a sense, but we have left the options open with respect to the things that implementations are doing today. So it's not like we have completely closed.
I
Those gates. For example, using flight size instead of cwnd, that's one of the things, but we have still left options for implementations, such as using RFC 7661; and if you're not using that, some implementations are still using cwnd and not flight size. So it's not like we have completely changed what implementations are doing.
I
The only change that I can remember which implementations are not doing is the ECN change, and I think that's a good change, because ECN CE markings are representative of congestion directly, and we should keep backing off. So I would say we're not going too far away from what implementations are doing, so it should be okay.
F
I just had a couple of quick comments. One, I wanted to encourage the draft authors to keep the discussion of the undo logic in the draft.
F
I know there was some discussion about whether to keep it or not, but I wanted to emphasize that undoing loss recovery events is really important for performance in practice, in the real world, and the CUBIC algorithm has enough state variables that it's non-trivial to know exactly which variables need to be reverted; or at least to be reminded that there are many different state variables, and not just cwnd and ssthresh, to undo. So I think that's important to keep in.
F
If there's been reordering due to a path change (I tried to respond in the thread, but wanted to emphasize again): I would argue strongly that it's okay to go ahead and undo if you detect that there was reordering and a spurious retransmit, because if there was actual packet loss in that round, then the loss detection machinery will detect that there was also loss, and will then invoke the congestion control response and slow down appropriately.
F
So I think it's fine to undo in the kind of scenario that was raised. And then, also, just to echo Stuart on the question of loss in wireless networks.
F
I definitely agree, from all the packet traces I've seen from YouTube and google.com, that cellular and Wi-Fi links these days do a really great job of link-layer retransmissions, so any losses that are there are usually because of the kind of scenario that Stuart mentioned, or there's a rate reduction and it takes a couple of rounds for CUBIC or whatever to slow down to match the new delivery rate. So thank you for all your nice work on this draft.
N
Yeah, some of the things that were talked about, especially towards the end, and then also the point that was mentioned. This document, I mean the pre-bis version, started out with wanting to document what Linux is doing, specifically because that was the canonical implementation, and the bis version started out by wanting to update that, because Linux has changed, and also to roll in some of the things that other implementations of CUBIC had in the meantime done.
N
So that is where we started out, pretty much all the way until we got Marco's review which, as we said, raised some very conservative points. It basically said: you can't move this to standards track saying this, because it's in conflict with, for example, 5681 or 4015. And while that's technically true, we shouldn't publish a proposed standard that goes against a MUST in a draft standard.
N
In reality, all the stacks have been doing it for over a decade and the internet is fine, and so at some point I think we should give up this very dogmatic view on congestion control and try to accept that things have moved on since we published some of those RFCs.
N
So some of those changes we made because we couldn't come to consensus on the issues, but I would really hope that we could be pragmatic and actually document what is working in practice, rather than find ourselves in a situation where we are unable to move forward because of a MUST that was written 15 years ago.
O
Yeah, I want to echo the wireless point, but go in a slightly different direction with it, which is: I think we should really be conscientious. I mean, I think all the changes that have been discussed sound mostly sensible, although the flight size versus cwnd change is a little concerning to me, because in practice I think that's hard to get right.
O
But maybe it is okay with enough caveats. I think we should be really conscientious about not putting things in a document that no one has implemented, so if we're in a position where we get to working group last call and there are some recommendations, or even MUSTs particularly, but even some recommendations, in there that zero CUBIC implementations actually do, I think we need to remove them.
O
I think we need to be really conscious of not putting things in the document that no one implements in production; I don't mean a toy implementation, but a major deployed operating system or other environment. I don't know, maybe I'm too harsh on this, but if no one implements it, there's probably a reason, and putting it back in is very silly in my opinion. But that's it.
A
Yeah, one point regarding the implementation question: I would make a difference between what major implementations explicitly say they don't want to do, compared to what they might want to do, or will do, but just haven't done because it only now comes up; it might make sense, and they just haven't implemented it yet. So that might be a difference.
I
Well, that point I'll take, but the flight size and cwnd issue really boiled down to: oh, there's no RFC that uses that, and we are moving to standards track, so we have to, I don't know, follow the trend. I was a bit disappointed with that; really, there is no point in using flight size, and if anything, RFC 7661 is probably the way to go, and that could be a MUST.
I
So I am not very familiar with all the RFC procedures, but that point was really: no standards-track RFC has recommended it, so why are we advising CUBIC to do it?
O
Yeah, sorry, I didn't mean to latch onto that one in particular; I only said it because it's the one where I'm most familiar with the ways it can not work. I think I'm receptive to the idea that if they intend to implement it, that might be okay, but I'm also apprehensive, because the IETF has standardized a bunch of stuff that no one implements, and my experience is that it usually works out pretty poorly.
O
You know, this is just the nature of the beast: if you've never written code for it and no one's ever deployed it in production, it probably works, but you just don't know, and simulations don't really cut it. So maybe I'm fine to drop this issue, but I just wanted to put my two cents in on that point.
N
Yeah, so as a little preview of the talk later on updating 5681: one of the questions I'm actually raising is whether we should try to roll in some of those changes, specifically the flight size versus cwnd discussion we've had because of CUBIC. At the moment the CUBIC draft updates 5681 because it wants to make it okay for CUBIC to do this, which raises the slightly interesting question of whether a proposed standard can update a draft standard, but we might be able to sidestep that issue.
A
I have a final question for the authors: you are using HyStart++ as a normative reference. I think it's Lars; could you answer that?
N
Yeah, it is a normative reference because we wanted to use 2119 language to say SHOULD, so we are basically tying ourselves to HyStart++.
A
So this means, as a message to Praveen, that the document needs to progress.
A
The milestones are okay, and I'm fine with them saying they want to do measurements, but he should be aware that there are dependencies, yeah.
N
I mean, this is yet another one of those things. HyStart++ is the only standards-track document we have in this space, and one of the points that Marco raised is that we shouldn't recommend something that isn't standards track in a standards-track document, and HyStart++ is the only thing that we intend to publish on standards track. So either we say, you know, do whatever, or we say do something very conservative; we might want to do that, but it's again one of these things where in reality people do things that we can't say in the document because of silly process reasons.
A
I just wanted to say: I'm happy if HyStart++ progresses around the next IETF, and I don't think it's a problem, but it shouldn't slip over and over again, yeah.
L
In the fall, but yeah, I think pretty soon we should be able to get there, and then aim to have it.
A
Perfect, thank you Neal, and I'm closing the queue after Neal.
F
Sounds good. Praveen, could you clarify which other stacks are implementing HyStart++ and which you're waiting for feedback on?
L
Yeah, so the Cloudflare QUIC stack implements it; I think Apple was experimenting with it, and FreeBSD as well; I think Netflix was experimenting with it. So there are multiple implementations out there; I'm just waiting for more.
A
So Martin states that he's a bit concerned that the post-working-group-last-call changes don't actually have consensus. So once we see the final document, I think we will.
A
Then I think we move on to the next presentation, the TCP YANG model. I think it's Michael.
B
Yeah, this is Michael speaking from the floor; I guess you can hear me, right? Okay. So this is a short update on the TCP YANG module that I am working on with Mahesh and Michelle, and there have been some recent updates to this document, mostly to ensure that we get closer to a working group last call and finish this small piece of work. There have been two updates recently.
B
We expanded on the specific YANG semantics of the TCP connection table, which is writable; this has caused some discussions in the past, so we added text on that. And there were some open issues in the security considerations which got fixed. So the -03 document was probably the first complete version of the document, and on that version we then got two extensive sets of review comments, both from Tom Petch. The first email had relatively small, minor edits, which have been fixed in -04.
B
So the current version is basically -04, and then we received the second set of extensive review comments, which are summarized on slide number four. These are a lot of good and valid comments; most of them are relatively straightforward, so I don't think there's much discussion needed, we just have to implement the changes. I've listed here that there are probably missing references, some of the references need to be referenced more prominently in the YANG module, and a couple of other things.
B
So, as I said, the references need to improve. There are also some small changes needed in the YANG module; some of that is just YANG semantics.
B
A more editorial aspect is the suggestion to add more warning signs regarding MD5, as this is a legacy technology and TCP-AO is much better. We have such warning signs already in the document, but we could do so more prominently, and that's something that's easy to change. The examples are also not perfect. So these are all relatively minor things that will be fixed in the next version, -05, hopefully within the next few weeks. There are only two things that are bigger questions, actually. The first issue is that there is an ongoing evaluation in the ops area working group of a couple of documents that work on service models, and one of these models, specifically the L3 network model (L3NM), also includes YANG statements on TCP-AO parameters, simply because a Layer 3 VPN site may need to implement TCP-AO to protect the control plane signaling of BGP specifically, and that creates an overlap between these two YANG modules.
B
The same overlap happens between the L3NM model and a lot of other IETF models, simply because it uses a lot of other technologies, but the overlap with the TCPM YANG module was not well described in our draft so far, and this is why we have to do something about that. There is also a technical difference in the modeling assumptions: the L3NM model assumes that some of the TCP-AO parameters are modeled in the YANG key chain.
B
Sorry, the only issue is that the corresponding RFC, which is 8177, doesn't foresee that as of today. So that is why we don't actually have that solution as of today in the TCPM YANG module. Therefore, I have explicitly added configuration parameters for the SendID and the RecvID; we will see later why these parameters are so relevant for TCP-AO.
B
So this is a difference at the moment. My suggestion is to clearly explain that difference and the reason why this model models those two parameters explicitly, and of course this also implies that we have to add a reference to the L3NM document, which is still missing in -04. So this is the first, possibly a little bit bigger, change, and it requires additional text. The last bigger request for change was a comprehensive comparison with the TCP MIB.
B
We got past feedback in TCPM that the TCP MIB is not so interesting for this YANG module, but now we have a request to make a comparison. This is something that's not super hard, because we still model a lot of what is in the TCP MIB already, with some slight differences to be future-proof, so we could add such a comparison, for example, in a new appendix.
B
So it's something that's doable, but we haven't done it so far, and this is something where I would ask the community whether it is considered useful. As I said, there have been past statements at the mic on the TCP MIB.
B
Any comments from the group would be particularly appreciated. And that's the current status: as I said, there's a new version coming up that should hopefully address all known issues, and we would really look for further comments from the group.
Q
Michael, what is the reasoning behind wanting to... why did the commenter want to do the MIB comparison?
B
I mean, I guess the underlying assumption is that there are quite a few implementations out there of the TCP MIB. So we know that there are implementations of the MIB, and if somebody who implements the MIB wants to transition to this YANG module, maybe a translation table could help. In most cases, to be honest, it's relatively easy to see the mapping; the statistics are very similar. We have removed, I think, one or two aspects from the MIB.
B
We could reason why we've removed that. In the MIB there is a setting for the TCP retransmission timeout calculation, and we've removed that one; I thought that this one is widely implemented, actually.
Q
Okay, well, I mean, if you think it's not a lot of work, I certainly wouldn't object to an appendix, but I'm just a little skeptical that YANG is going to become a dominant configuration mode for TCP endpoints, barring the special case which you correctly targeted, which is these BGP hosts. So I don't think it's...
B
I mean, that comment is not coming up for the first time; we all know that YANG is not the most important configuration method for a lot of TCP stacks, but it matters on routers.
A
Okay, thank you. The next is a presentation about the test vectors and the status of the draft. So, from... I don't know how to pronounce that; let's see how this comes up.
G
I guess you can see the slides, right? Okay, great, excellent. Sorry, I had some technical difficulties; my laptop just crashed, so I had to change to the spare one, but there we go.
G
Let's see the next slide. So the reasoning here is that implementing TCP-AO is quite complex, so the test vectors should help to implement it correctly and, of course, help with interoperability.
G
This was presented last time, at IETF 110; not many changes. Shortly after IETF 110 we were able to test it in our lab with IPv6, so no changes there; everything was working fine.
G
There are some clarifications: the text on middlebox traversal was changed, and we also clarified which parameters are decimal or hexadecimal, or even binary, for the respective fields. So these are the changes.
G
So thanks for the feedback so far; of course, feedback is always welcome.
B
Yeah, this is Michael speaking. Also, just as a heads-up: this is an informational document, and the bar for informational documents is maybe a little bit lower than for standards track. So for an informational document, I think working group last call is easier to achieve, and we might not be far away from that.
A
If that's not the case, then we move on to Greg, who is presenting on TCP-AO interop.

D
Oh, wait a minute.
J
Okay, hi, my name is Greg Hankins. I'm presenting on behalf of myself and also Melchior Aelmans, who participated and helped facilitate this interop test. I should mention — it's not clear from these slides — that the main use case we're looking at in the operator community is BGP. So this test focused on TCP-AO with BGP, and all the work that we're doing to evangelize the technology and promote it in the operator community is around its use for BGP as a replacement for MD5.
J
We conducted an interop test last summer. It was in the middle of the COVID pandemic and the shutdowns, so we figured, let's use the time to do some interop testing over the internet. It was done remotely using virtual machines, and it turns out that doesn't really matter.
J
The interop was pretty easy, but we did learn a couple of significant lessons that are here in the slides. Lesson number one — and this was maybe not obvious to me as a newcomer to TCP-AO — is that for someone configuring it for the first time, there could be a little bit of a learning curve. So, the first lesson that we learned:
J
That was one important thing that we learned. We provided this feedback on the YANG model, to hopefully clarify that in the YANG model, and we also updated our respective documentation to make sure that people understand that's how you configure it. The other thing is that, since TCP-AO supports multiple authentication algorithms — cryptographic algorithms, I mean — you have to make sure that you also configure the same algorithm on both sides, or else it's not going to work.
J
The other lesson that we learned is that we had a firewall in the path, which we initially did not know about. The Nokia router was connected to the internet and was open; the Juniper router was behind a firewall, and it turns out that the firewall was actually modifying the MSS option for some reason.
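Why a middlebox rewriting the MSS option breaks TCP-AO can be illustrated with a toy MAC check. This is not the real RFC 5925 key derivation or MAC construction — just an HMAC over stand-in bytes for the header plus options, showing that any in-flight change to a covered option invalidates the MAC on the receiving side.

```python
import hmac, hashlib

def toy_ao_mac(key: bytes, header: bytes, options: bytes) -> bytes:
    # TCP-AO's MAC covers the TCP options by default; the real
    # construction also covers a pseudo-header and a sequence-number
    # extension, which this sketch omits.
    return hmac.new(key, header + options, hashlib.sha1).digest()

key = b"traffic-key"           # stand-in for the derived AO traffic key
header = b"tcp-header-bytes"   # stand-in for the header with the MAC field zeroed
sent_options = b"MSS=1460"
received_options = b"MSS=1380"  # the firewall rewrote the MSS option

sender_mac = toy_ao_mac(key, header, sent_options)
ok = hmac.compare_digest(sender_mac, toy_ao_mac(key, header, received_options))
print(ok)  # False: validation fails, so the segment would be discarded
```

This matches the symptom in the test: the SYN carrying AO never validates at the peer behind the MSS-rewriting firewall, so the session cannot establish until the middlebox behavior is fixed.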
J
There wasn't really much else to say; once the TCP sessions came up, we exchanged routes as an extended test, and everything worked great. In terms of implementations, there are currently several commercial implementations that we're aware of. If anyone knows of other implementations, please let us know and we'll add them to the list.
J
There are no open-source implementations. This is a problem, because the community can't really adopt the technology until there are, number one, open-source implementations and, number two, a tool ecosystem to support those implementations. So currently there are no open-source implementations and, as far as we know, only Wireshark supports TCP-AO.
J
Other tools like tcpdump don't have support for AO yet. So here's the exciting news: just last week, several members of the community finalized a development project that's funded by the RIPE NCC. We have funding to develop a reference implementation. It's being done by Philip Paeps, who's been a long-time FreeBSD committer.
J
We're going to provide support by offering router VMs for interop testing — Juniper and Nokia will support the testing with VMs — and the set of deliverables is, first of all, a reference implementation of TCP-AO for the FreeBSD kernel, including the nc (netcat) utility, and then a port of that reference implementation to the Linux kernel. We're currently working on more sponsors to also add extensions to some popular routing implementations like OpenBGPD, BIRD, possibly FRR, and this work is targeted.
J
Why you should use this technology, a central location for links to implementations, configuration, all that kind of thing — everything that an operator would need in order to figure out whether their routing implementation supports TCP-AO, how to configure it, as well as some of the benefits, as I mentioned. And that's about it. Any questions? If you have any comments, or any other tools and resources, or if you'd like to be involved in some of the ongoing projects, get in touch with us.
F
You mentioned — is your team aware of what looks like a TCP-AO implementation underway by Leonard Crestez for Linux?
F
No, we did not know about that. Yeah, it does seem like, over the past couple of months, I do see patches from Leonard Crestez for what looks like TCP-AO.
J
No, we didn't, primarily just because we each had implementations and we didn't feel like we needed to. Actually, the Nokia implementation was released many years ago — as was Cisco's — even before the test vectors draft was written, so that wasn't useful for us at the time. But I expect it will be useful for the ongoing work on the FreeBSD reference implementation.
B
Yeah, if any such observation can be posted somewhere — for example, to the list — that would be excellent. If somebody confirms that the test vectors document is correct and has been useful, that would be a very strong help for TCPM to actually finish the document.
A
Neither Wes nor Joe — but Wes, is Wes still there?
A
So my point is: there are two documents — maybe you can jump in. One describes how to extend the option space for packets after the initial exchange with SYN segments, and another one, which is an individual submission, is about extending the option space for packets with the SYN option set. He has made no progress on the implementation, and thinks that the document is becoming more stable.
C
A little bit — it is indeed quite stable, and has been for a while. It's just been a case of a lack of eagerness or urgency in implementing it anywhere, and that's what it has been paused on.
E
There may be some middlebox interaction, or a hardware-optimization interaction, with EDO, especially if someone is implementing it — it always gets stuck there. I've tried to get some feedback on whether this is feasible as it is and whether there are concerns about it, so my point now is that we need more feedback on this document.
E
But this is kind of an important document, because option space extension is very important, so I would just like to see more feedback. And also, on the SYN extension: I have one concern. This is indeed not an adopted working group item, so there is no working group consensus, and it's not ready for working group last call, I think. And I basically oppose the idea of submitting this draft as an individual submission on the individual stream.
A
Yeah, the milestones are a bit in the future. Martin — as an individual, or as an AD? I don't know.
Q
Oh, as an individual. Thanks for the presentation, Wes. Can you talk a little more about implementation? Are there literally no implementations, or is it just a matter of what exists not having been deployed anywhere?
C
Actually, I believe there was some work, but it was several years ago. I think Joe was working with a grad student who did some implementation work on EDO; I don't think the other one was implemented yet. But yeah, I think there was some work, and that experience was plowed back into the draft. However, it's not the case that that work resulted in, say, patches to the Linux kernel that are deployed, or anything like that.
C
So I think an interesting way to deal with this might be to attempt a last call and see who responds. I think that might force people to look at it, because there's really no driving need for this at the moment — although there have been many times in the past when people wanted more option space and didn't have it, and that's why these drafts exist.
Q
Yeah, I mean, I think last call as a technique to shake out stuff is good, as long as you also accept "no" as an answer. I guess my other thing would be: I'm a little uncomfortable with the idea of EDO being standards track. It seems to me like the whole game here is deployment experience — the whole question here is the middlebox reactions — and without any deployment experience I don't know how we could possibly recommend this to people.
B
Yeah, just to echo what Wes said: ten years ago this was really an urgent problem that showed up at least once per year, and that was basically when we started this work. Of course, things have gotten quieter on options recently but, as I said, there was a strong need for that document.
A
But Yoshi said he had comments, so I suggest that he send the comments so they can get incorporated into the document.
A
Okay, so we have 40 minutes left, and we need 40 minutes for the next three presentations. So I'm running the timer to keep the time budget respected. Lars, you're up next.
N
Yeah, thanks. Can you share — should I share?
N
I think it's these ones. Can you see? Yes — and sorry for being so super late with them. I actually didn't have them until earlier today, because I thought I would speak without slides, and then I decided maybe giving people something to look at other than myself is a nice thing to do, right?
N
Which is what I did, because I wanted to try it out. It's a new functionality that could be advertised a bit more, but it means that your chairs don't have to do all the slide handling themselves anymore, which is nice. Anyway, I guess it's good.
N
I made slides just for that reason. So this is a sort of initial presentation wanting to raise the question of whether we should revise RFC 5681, which is TCP congestion control — a pretty core RFC. I already talked earlier, when we had the CUBIC presentation, about some of the reasons why I think we might want to do that. But let's start with a brief history.
N
So 5681 is the current draft standard. It obsoleted 2581, which was a proposed standard, which in turn obsoleted 2001, which was also a proposed standard, and just based on the RFC numbers alone you can see it's been a while since this was last touched.
N
There's one existing erratum on 5681, which has been verified, so the minimum goal would be to roll that into the revision, which is, I think, pretty uncontroversial given that it's already verified. I reached out to the original authors: Vern Paxson has said that he has moved on and has no time to spend on this, but wishes us luck; Mark Allman and Ethan Blanton said they would contribute. And I've also reached out to Richard, Neal Cardwell, and Jana Iyengar for help, and that is already five.
N
So if those five want to do it, full speed ahead to them. Otherwise I can volunteer to act as editor, but I felt like I wanted to have some people on board that know this stuff better than I do.
N
There's a repository on GitHub which basically has a markdown version of the text of 5681, with just formatting changes and some very minor related things, like updated references and so on. So the one question I want to raise is: should TCPM even do this — basically open 5681 with those goals on the slide? And there's another goal on the next slide.
N
That might make the discussion a little bit more interesting: there are things that have accumulated since we published 5681 — specifically, things that have been widely deployed for a decade or longer — that are in conflict with what 5681 says. One specific example came up when we did the CUBIC bis: 5681 says you set ssthresh based on FlightSize, and CUBIC — basically since its inception, as far as I know, and also all major CUBIC implementations —
N
doesn't do that; they update ssthresh based on cwnd, not FlightSize, because that has benefits in app-limited scenarios. That is technically not allowed by 5681, which is why it was raised for CUBIC when we wanted to move it to standards track. Gorry pointed out that there's RFC 7661, which is experimental, that already allows ssthresh to be set to something between FlightSize and cwnd, so CUBIC would be sort of
N
in line with 7661 — but 7661 didn't update 5681, and it couldn't, because it's experimental. Also — I think this was brought up by Yuchung — the QUIC congestion control document actually defines updating ssthresh based on cwnd, and so is also not in line with 5681. This is one example of something where we are diverging from the original specification in 5681, but there are probably others like that, and others might have other examples.
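The FlightSize-versus-cwnd distinction above can be made concrete with a small sketch. The 0.7 multiplicative-decrease factor is CUBIC's usual beta and the segment size is illustrative; neither is a normative value from the documents discussed.

```python
MSS = 1460  # illustrative segment size in bytes

def ssthresh_rfc5681(flight_size: int) -> int:
    # RFC 5681: ssthresh = max(FlightSize / 2, 2 * SMSS)
    return max(flight_size // 2, 2 * MSS)

def ssthresh_cwnd_based(cwnd: int) -> int:
    # What the discussion says CUBIC implementations actually do:
    # reduce from cwnd (beta = 0.7, computed in integer math here),
    # which RFC 7661 permits as a value between FlightSize and cwnd.
    return max(cwnd * 7 // 10, 2 * MSS)

# App-limited example: cwnd grew to 100 segments, but only 10 are in flight.
cwnd, flight = 100 * MSS, 10 * MSS
print(ssthresh_rfc5681(flight) // MSS)   # 5 segments
print(ssthresh_cwnd_based(cwnd) // MSS)  # 70 segments
```

The gap between 5 and 70 segments is exactly why app-limited flows benefit from the cwnd-based reduction that 5681's FlightSize rule technically forbids.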
N
So the question is: if we did update 5681, would we want to move some of these things in? Specifically for this one, the CUBIC bis document that we're doing is already updating — or proposing to update — 5681, specifically to make CUBIC's behavior okay, basically saying whatever CUBIC does is fine too. And there might be other things here that we might want to make okay in 5681.
N
This does raise a question in terms of the standards process: if we move things into 5681 that aren't baked enough —
N
would that mean we need to go back to proposed standard with it, or stay at draft standard and not go to full internet standard? But I think that question of how we're going to do this we can answer once we know what we want to do to the specification. And I see I have people in the queue, which is good, because that's all I had. Thank you.
R
Yeah, hello everybody — thanks for bringing this up. I think these changes on the slide are actually more important than moving to internet standard, so I think we should do this for the sake of having these changes. Whether we go for internet standard afterwards or not is a different question for me — we might still be able to do it, because it's based on what's actually implemented out there — but that's the least important point for me. And I also want to mention another thing.
Q
So, as an individual, I support this work. I'm somewhat inclined to just roll the ABC bis — 3465 — into it as well, but I know that's the next talk, so I'll just leave it at that. Regarding the standards level, I don't have a really strong opinion about it, but I do think we need to think hard about maybe reframing the language to be a little more permissive in terms of experiments and so on.
Q
If it's an internet standard, it's going to be very, very hard for documents to modify it. So I would love to reposition it to talk about congestion control experiments, to explicitly acknowledge the existence of things that we know about, like CUBIC — which I think you're doing here. I just don't want to have further tinkering blocked by "you can't modify this internet standard" becoming a line of objection to further experimentation. Thanks.
L
Congestion control is still evolving — we're still finding new ways of doing things — so not baking it in as internet standard would be a better thing to do. But this makes sense to me. I had one question: apart from this particular ssthresh issue, what are the other changes that we're thinking about? Are there other issues or updates that we want to do here?
N
I don't know, frankly. I haven't had a chance to get together with the people — I got to the point where I asked them whether they're interested, but I haven't actually sent an email about what we want to do. When I contacted them, the goal I pitched was to move this to internet standard, and so by definition we thought this was going to be a pretty minor update.
N
If we think we want to make more changes, then we sort of need to reconfirm and restate this. But I don't know — I'm guessing we want to start a list of some sort. I'm hoping it will still be a short list, but this is the one thing that I have fresh in my mind because of CUBIC.
M
Okay — on the changes: moving to internet standard, I think, would be wrong, and the reason is that I think there's declining use of it in reality.
B
Yeah, just a quick comment from the chairs: there's also something that might produce editorial changes. 793bis has a congestion control considerations section, and that changes the framing for 5681 a bit, because the basic congestion control requirements are in 793bis, and as of today that is basically 5681.
B
So, as I said, the changes in 793bis could actually result in editorial changes in this one as well.
A
Okay, so I would say we take this discussion to the list, since we have to move on. The next one is about ABC — Vidhi, should I run the slides again? I can try.
A
You can do it if you want. Okay, here we go. You have 15 minutes, but if you can do it in less time, even better.
I
Oh, nice. So this started as a discussion on the mailing list about how ABC currently works, and Yuchung, Neal, and I talked about it and wanted to bring our proposal to this meeting and to the mailing list. So, let's see — a recap of RFC 3465, Appropriate Byte Counting.
I
I wasn't really working on congestion control in 1999, but there was a paper in 1999 about how receivers could manipulate senders into increasing the congestion window, by sending acknowledgements for one byte instead of for the whole packet. This means the sender would increase the congestion window by a thousand packets, while the receiver —
I
could basically be just acking a thousand bytes. So RFC 3465 fixes this by making the increase based on the bytes acknowledged, not per ACK. The issue that is discussed in RFC 3465 is stretch ACKs, which could sharply increase the congestion window and cause very bursty behavior; for that it has a limit — a cap called L — to limit the bursting behavior. So this is the recap of 3465. On the next slide:
I
We are talking about the problem statement. With so much evolution and advancement, the cap of two packets is not cutting it. We think that these days stretch ACKs acknowledge significantly more than two packets, and this means the congestion window increase is going to be limited to two packets per ACK — the increase is going to be very small — and other stacks don't implement this limit.
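The two-packet cap described above can be sketched in a few lines. The constants are illustrative; the function implements the RFC 3465 slow-start rule of growing cwnd by the bytes newly acknowledged, capped at L per ACK.

```python
MSS = 1460  # illustrative segment size in bytes

def cwnd_after_ack(cwnd: int, bytes_acked: int, L: int = 2 * MSS) -> int:
    """RFC 3465 Appropriate Byte Counting during slow start: grow cwnd
    by the bytes newly acknowledged, but by no more than L per ACK
    (L = 2*SMSS is the cap the proposal says no longer fits modern
    stretch ACKs)."""
    return cwnd + min(bytes_acked, L)

# A stretch ACK covering 10 segments only grows cwnd by 2 segments
# when L = 2*MSS — the slow-growth problem described above.
print(cwnd_after_ack(10 * MSS, 10 * MSS) // MSS)  # 12, not 20
```

With offload-driven stretch ACKs routinely covering tens of segments, the `min(..., L)` term dominates and slow start loses its intended exponential growth, which is the motivation for dropping L.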
I
Because of the low throughput and the slow congestion-window increase, Linux has been implementing ABC without L since 2013, and the way it controls bursting is that it encourages the use of pacing to reduce the bursts.
I
The solution we're trying to propose is one where we remove this limit L and say that the congestion-window increase is a separate problem from bursting. So we use the ACK information — the total amount of bytes acknowledged — to increase the congestion window, because that's based on what we are learning about the network and what we are probing, and we separate out the bursting issue by using some sort of pacing, per the PRR RFC, 6937.
I
So, in just plain English, it is the amount of data that has left the network. Let's say SACK is supported: in the case when we don't have any SACK blocks, delivered data is basically the change between the old snd_una and the new snd_una. And when there are SACK ranges — which means there is some reordering, or some packets are getting acknowledged before snd_una increases —
I
When SACK is not supported, the first case is when there are no duplicate ACKs: it's just the same as the previous case, where it is the change in snd_una. And when there is some recovery going on and we have duplicate ACKs, then delivered data will be equal to one packet on every single dupack — because one packet is leaving the network — and on a subsequent partial or full ACK it will be updated to whatever the change in snd_una is, minus one SMSS — one packet — for each preceding dupack, so that we don't double-count the packets that are cumulatively acknowledged in the full ACK.
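The cases just described can be sketched as a single function. The parameter names are ours, not from any draft, and `newly_sacked` stands in for the SACK-range case the talk only touches on; this is a simplification that assumes full-sized segments.

```python
MSS = 1460  # illustrative segment size in bytes

def delivered_data(old_una: int, new_una: int,
                   newly_sacked: int = 0,
                   preceding_dupacks: int = 0) -> int:
    """Per-ACK delivered data as described above.
    - With SACK: bytes newly cumulatively acked plus bytes newly
      covered by SACK ranges on this ACK.
    - Without SACK: a pure duplicate ACK counts as one MSS leaving
      the network; the later partial/full ACK subtracts one MSS per
      preceding dupack to avoid double counting."""
    cum = new_una - old_una
    if newly_sacked:
        return cum + newly_sacked
    if cum == 0:
        # pure duplicate ACK: one full-sized segment left the network
        return MSS
    return max(cum - preceding_dupacks * MSS, 0)

print(delivered_data(1000, 1000))                        # dupack: 1460
print(delivered_data(0, 10 * MSS, preceding_dupacks=3))  # 7 segments' worth
```

The point of the bookkeeping is the invariant the speaker states: summed over the ACKs of a recovery episode, delivered data equals exactly the bytes that left the network, with no segment counted twice.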
I
So this is a simple definition of delivered data, and it reflects exactly what is leaving the network: with any ACK, we know how much data has left the network. And this is the last slide: this is probably a simple change, and can we fold these changes into 5681bis instead of doing a 3465bis? That way we can probably obsolete the 3465 RFC.
I
That's all I have. Open for questions.
Q
So, as I said a minute ago, I think the answer to the last question is yes. 5681 already has ABC, but with L equal to one, if I can characterize it that way, so I think just having fewer documents would be good. The other caution I would have is to email Mark Allman.
L
Windows caps it to eight right now — that change was made particularly as stretch acking became more common — and the Windows TCP stack doesn't do pacing by default right now; I believe Linux doesn't do pacing by default either. So my concern with just removing the L limit is that then we should say that pacing —
L
The QUIC congestion control RFC says pacing is a must, if I recall correctly, and then it doesn't have the L limit in the specification. So yeah, if you want to remove L, then you have to make pacing mandatory, and that may not work for all implementations — on IoT devices, etc., where pacing might...
I
Right, you're absolutely right — without pacing there needs to be... I mean, I'm not in favor of L, but I think the QUIC draft does recommend it — I don't think it says must; my memory is probably not accurate — and it does provide an alternative, which is a kind of pacing in the transport stack, by distributing how much you're sending in one send across the whole RTT.
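Distributing a window across the RTT, as just described, reduces to a simple gap computation. This is a sketch under the assumption of full-sized segments; production stacks additionally scale the rate by a pacing-gain factor, which is omitted here.

```python
def pacing_gap_seconds(cwnd_bytes: int, mss: int, srtt: float) -> float:
    """Spread one cwnd of data over one RTT instead of bursting it in a
    single send: the inter-packet gap is the smoothed RTT divided by
    the number of packets in the window."""
    packets_per_rtt = max(cwnd_bytes // mss, 1)
    return srtt / packets_per_rtt

# A 10-packet window over a 100 ms RTT paces roughly one packet
# every 10 ms instead of all 10 back to back.
print(pacing_gap_seconds(10 * 1460, 1460, 0.100))  # about 0.01 s
```

This is the sense in which removing L can be traded against pacing: either cap how many packets one ACK may release, or smooth the same amount of data over the round trip.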
I
Maybe someone can correct me, but you're right: without pacing there can be an L, but we couldn't reach a consensus on what this L should be. Like you said, Windows is using eight; I think we are using ten; and the draft says two. And Mark — based on whatever he has replied on the mailing list — is not going to like it, because then we're just guessing numbers.
E
I have one clarification question. In my understanding, ABC is used for the slow start phase and the congestion avoidance phase, but ABC is not used in the recovery phase. So on the slide where you mentioned dupacks — that is for the other cases, not the recovery case, right?
I
I think if you read the ABC document — for example, if there's an RTO, then there's slow start recovery, and in that case it recommends L equal to two in certain scenarios and L equal to one in other scenarios. But for the fast retransmit scenario you're right, because that would be in congestion avoidance. I don't think we use ABC in congestion avoidance at all, because that's a straightforward increase by one MSS, and that would apply to loss recovery as well — during fast retransmit, but not during RTO recovery.
O
Yeah, I just wanted to confirm that the QUIC RFC, 9002, says either you must pace or you must limit the burst to ten, basically. And the choice of ten — to go back to the question of whether it's a magic number or not — it's not.
O
I mean, it is sort of a magic number, but that magic number comes from the IW10 draft, which kind of implicitly allows you to burst 10 packets into the network at once as a direct result of that draft. So if you're going to pick a magic number, you might as well pick one based on an existing RFC. It also, coincidentally, matches the initial FQ pacing quantum of Linux, so there's also experimental evidence for it.
L
Praveen — I think we should recommend pacing, I guess, and then the fallback should be an L value, and I'm perfectly fine with that.
I
So the point is really not whether it's 10 or 2 or whatever. I think the point goes beyond what ABC recommends: can you burst the whole congestion window in one send? And I think the answer to that is pacing. We cannot — I think we shouldn't be — bursting a whole congestion window, wherever we are in the slow start phase.
A
Okay, so I'll close the line, and let's continue the discussion on the mailing list. In principle, I think fewer documents are better than more. The rest of the time can be spent by Neal on silent close.
F
All right, so I'm going to try to share my — share... well.
F
Here we go. All right, can everybody see that? Yep? Okay, great. So we wanted to talk today about an issue that we have seen in production, and a particular solution that we've been using, and to discuss it with the community. This feature is called — or we call it — TCP silent close.
F
The motivating scenario here is: imagine a large-scale internet service with TCP connections to millions, perhaps hundreds of millions, of cell phones, where the server machine's kernel TCP stack closes all of those TCP connections quickly due to one of two kinds of situations. Either, one, the server application exits or restarts cleanly — to update the executable or the configuration — or, two, there is a server application crash: the process segfaults and dies, and the kernel closes all of those millions of TCP connections.
F
So in either of these cases — the clean exit or the crash — what you have is TCP in the kernel bursting millions of FINs to millions of cell phones, often in the local region or country.
F
Obviously, that has a problematic impact. In particular, there's heavy network and energy resource usage, basically to find, contact, and wake up those millions of dormant cell phone radios. And then there's also a significant impact because the cell phone network has to queue those FINs and repeatedly retry them for the phones where the radio is powered off or the phone is unreachable.
F
So it might be retrying those FINs all night in some cases, depending on the cell phone system implementation, and that has the potential to cause cellular networks to become overloaded and, in some cases, fail. The concern here is that this kind of scenario is a potential scalability or reliability vulnerability for some cell phone networks out there.
F
So, is this specific to TCP? Well, in TCP this problem happens in particular because the responsibilities are typically split between the application and the kernel. Here we have a case where the application crashes or exits, the kernel still has the state for that socket, and it tries to finish the FIN/ACK handshake at the end — but can do this in a very bursty manner for millions of connections. Interestingly, in a protocol like QUIC, which is typically implemented in user space —
F
you probably don't have this problem, because the kernel has no transport state. If the application exits cleanly or crashes, the kernel has no state, so it can't actually do this burst of a million FINs or the equivalent; there's no extra network load, and you don't run into this problem. But in general, presumably, this problem could happen for any kernel transport.
F
So the goal of the TCP silent close effort was to say that, upon application exit or crash, instead of sending those big bursts of millions of FINs, we want to basically manifest the traditional behavior that the internet has seen for a kernel crash, or a machine crashing or powering off. Basically, in that situation, all of the connections from the server that just crashed or died —
F
go silent, and then any later incoming traffic for those connections — sent by the client, arriving at the restarted or rebooted or powered-on server — triggers a reset, because obviously that server's kernel has no state for those connections.
F
We still want it to be the case, though, that upon the normal, healthy close of an individual server TCP connection, we go ahead and use the traditional FIN/ACK handshake at the end. One key design feature of this approach is that no new network behaviors are introduced by this kind of silent close, because applications and TCP stacks already have to be prepared for this kind of silence, followed by a reset if they later try to contact the remote endpoint.
F
Before, this could happen with a kernel crash, a machine that's powered off, or a cellular or Wi-Fi link that is no longer usable because the user is too far away from an access point. With a silent close feature in place, the same symptoms can also be manifested by an application crash or an application exit.
F
So, just to clarify what we're talking about here: compare a server process exit or crash before — with the traditional TCP implementation —
F
where the behavior you get is millions of FINs going out to the network in a burst to all these handsets, being retransmitted or retried many times — and after: if you have a silent close feature and there's a server process exit or crash, you don't get that storm of FINs. So what is the precise mechanism that we're proposing here?
F
The idea is a boolean TCP silent close per-socket option, set via setsockopt and read via getsockopt. When this feature is enabled and a TCP connection is closed or shut down for any reason, the kernel sends no FIN, sends no reset, and frees the socket state immediately. Effectively, as said before, the goal is that upon application exit or crash the connection manifests the traditional behavior for a kernel crash or machine power-off: the connection goes silent, and any later —
F
incoming traffic just gets a reset. In particular, we've used this option for years on our team for a limited set of very large-scale services, and we plan to send this code upstream to Linux when net-next opens — I believe next week. So we wanted to discuss this in this community, since we want to start getting this into the open-source world. The usage model is pretty simple.
F
The idea is that you enable silent close on a listener, or immediately after a connection is established — after accept. Thus, if the process crashes or exits at any time during the normal operation of that connection, you don't get this large-scale burst of FINs. However, when the process wants to go ahead and close a single TCP connection, because the work is done or the client went away, the idea is that you disable the silent close feature and then do your close or shutdown, so that you get a normal FIN/ACK handshake when everything is going smoothly and cleanly.
F
You know, the API is very simple: setsockopt with the TCP silent-close option, value one to enable, zero to disable, etc. On the API details: we would suggest that this is, at least for now, guarded by a sysctl, so that the system administrator has to explicitly enable it, since it's a new feature. We propose that child server sockets inherit the option from the parent, so that you don't need a new system call to enable this on every new connection.
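The per-socket API described here can be sketched in a few lines. Note that TCP_SILENT_CLOSE is not a real Linux socket option; the name and numeric value below are illustrative placeholders for the proposed feature, so on current kernels the setsockopt() call simply fails and the helper reports that.

```python
import socket

# Hypothetical option: NOT a real Linux socket option as of this talk.
# The name and the numeric value 42 are illustrative placeholders.
TCP_SILENT_CLOSE = 42

def set_silent_close(sock: socket.socket, enabled: bool) -> bool:
    """Toggle the hypothetical silent-close option.

    Returns True if the kernel accepted it, False otherwise
    (e.g. ENOPROTOOPT on kernels without the feature).
    """
    try:
        sock.setsockopt(socket.IPPROTO_TCP, TCP_SILENT_CLOSE,
                        1 if enabled else 0)
        return True
    except OSError:
        return False

# Usage mirroring the talk: enable on the listener (children would
# inherit it on accept()), and disable again right before a deliberate
# clean close so the normal FIN-ACK handshake still happens.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
supported = set_silent_close(listener, True)
# ... accept() and serve; for an intentional close of one connection:
#     set_silent_close(conn, False); conn.close()  # normal FIN handshake
listener.close()
```

On a kernel without the feature the helper returns False, which matches the sysctl-guarded, opt-in deployment model the speaker proposes.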
F
To reduce overhead, we would propose that it overrides the SO_LINGER setting and returns immediately. We looked at, or considered, some alternative semantics.
F
So, for example, we could say that if the application does a close or shutdown system call, that always attempts the FIN-ACK handshake, and you would only get a silent close upon process exit, either cleanly or due to a crash. But we decided that might prevent applications from programmatically deciding to silently close millions of connections if they realize they need to, to manage memory or handle denial-of-service attacks, etc.
F
Usage considerations: we do encourage people, before deploying this, to really consider the possible negative impacts. Obviously, if the server is closing the connection without the FINs, that could leave extra memory usage either on the client side or in the middleboxes, so we do want people to consider that. We do want this to be a feature
F
that's used carefully, and only to prevent outages, not just to save a few packets here and there. And there has been some related work in this area that I think helps motivate this and shows that the community does feel there has been a long-standing need for this kind of thing. There was a paper from CoNEXT a couple of years ago called "Silent TCP Connection Closure", and their motivating use case was cellular networks as well,
F
although their angle was reducing energy consumption and signaling load. We thought that was kind of a heavyweight implementation, because it required a new option on the wire; you have to upgrade the client and the server OS, and then you have to upgrade the client and the server application to agree on enabling this. So we didn't think that was a good model. There's also SO_LINGER, which has been around for a long time with a similar practical motivation of scalability: avoiding TIME-WAIT state on servers for millions of connections.
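The SO_LINGER behavior being contrasted here is standard and easy to demonstrate: with l_onoff=1 and l_linger=0, close() aborts the connection with a RST instead of a FIN, so the closing side skips TIME-WAIT. A minimal loopback sketch:

```python
import socket
import struct

def enable_abortive_close(sock: socket.socket) -> None:
    """Standard SO_LINGER abort: l_onoff=1, l_linger=0 makes close()
    send a RST instead of a FIN, skipping TIME-WAIT on this side."""
    linger = struct.pack("ii", 1, 0)  # struct linger {l_onoff, l_linger}
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, linger)

# Demonstrate on a loopback connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

enable_abortive_close(conn)
onoff, linger_secs = struct.unpack(
    "ii", conn.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
conn.close()  # sends RST, not FIN; no TIME-WAIT on the server side
cli.close()
srv.close()
```

This is the "magic number" API the speaker did not want to extend: a RST is still sent on the wire, whereas the proposed silent close sends nothing at all.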
F
We thought this was similar in spirit, but we didn't want to change this long-standing API, or add a new magic number to that existing API and risk compatibility issues. There's also an interesting silent-close implication of the Linux TCP repair mode, which is for connection migration, but the semantics don't really line up with what you want, because if you use TCP repair mode, you basically
F
are serializing your connection state and disabling the operation of that connection in the kernel, which is not exactly what we want to do here. Here we want to basically be able to continue to use this connection and just say: if that application exits later on, then we want to have that silent close for that connection. So yeah, on the effort status: we've used it for years, and we wanted to open-source this, so we wanted to come discuss this with the community.
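The TCP repair mode mentioned here is a real Linux facility: a socket closed while in repair mode sends neither FIN nor RST, but entering it requires CAP_NET_ADMIN and detaches the connection from normal protocol processing, which is the mismatch the speaker describes. A sketch of probing for it (the option value 19 comes from <linux/tcp.h>; Python's socket module does not export a TCP_REPAIR constant):

```python
import socket

# Linux-specific: TCP_REPAIR is 19 in <linux/tcp.h>. Entering repair
# mode needs CAP_NET_ADMIN; without it, setsockopt() fails with EPERM.
TCP_REPAIR = 19

def try_enter_repair_mode(sock: socket.socket) -> bool:
    """Return True if repair mode was entered, False if the kernel or
    the process's privileges do not allow it."""
    try:
        sock.setsockopt(socket.IPPROTO_TCP, TCP_REPAIR, 1)
        return True
    except OSError:
        return False

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
entered = try_enter_repair_mode(s)
# If entered, close() on this socket would emit no FIN and no RST,
# but the connection is frozen while in repair mode, so it cannot
# keep carrying traffic the way the proposed option allows.
s.close()
```

This illustrates the design point: repair mode gives you a silent close only by taking the connection out of service first, whereas the proposal keeps the connection fully usable until exit.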
F
We'd welcome a discussion, although I see we're probably out of time; maybe people can connect with us on the list, or now as well. Thank you.
A
Thank you for the presentation. We are basically out of time, but maybe three quick questions. Maya?
R
I have a question, and I will be brief. So, as you said on this slide where you talk about usage considerations, you basically have a trade-off here, right? You try to reduce network load, but you increase load on the other side, and maybe in middleboxes, by not sending something. So I'm not sure what I think about this idea, because I don't know how to rate the trade-off, right? I don't understand what the implications are for all the involved parties.
F
Okay, thank you, yeah. So it's our estimation and experience that the extra load it imposes, keep in mind, is just memory in the middlebox and in the client, and those pieces of the system already have to have logic in place to do garbage collection and bound that state. So it's our sense that it's a net safety improvement to deploy something like this to avoid accidental,
F
you know, massively excessive load on the cell phone network, since the players that have to deal with the extra state already have to deal with this issue of bounding that state.
A
Okay, so the question I had was: I understand that you do this on the close of the socket, or because the process died, but if you shut down the read or the write side, I'm not sure that, when you do that, the TCP connection should go away.
F
So I don't think this changes the shutdown of the read side or the write side; that's my recollection of how we've implemented it.
F
And I think that's a detail; if people see problems with that, we could obviously discuss it.
K
Thank you. I'll try to make this quick because we're over time. I see the arguments for this, and I'm not saying it's a bad idea. I am saying it needs to be done very carefully. I think there's a real risk of unintended consequences here.
K
It's very easy, like in the research paper you cited, to not send the FIN to save packets, and, as you say, this puts a bigger memory burden on NATs and firewalls for that state. And that also has a consequence, because, as I'm sure you're aware, right now TCP NAT mappings tend to have a much longer lifetime than UDP mappings, and the reason that gateways, firewalls, and other middleboxes could do that is because they can assume they'll see a TCP FIN or reset that tells them when a TCP connection is done.
K
Then that will force NATs to be much more aggressive about scavenging apparently idle TCP mappings, just like they do for UDP today, and then you might end up having to send ten times as much keepalive traffic in this future world. So this apparently well-motivated move to save one FIN packet has the consequence that we're now sending ten times as much keepalive traffic, 24 hours a day, to sustain these connections.
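The "ten times as much keepalive traffic" argument can be sketched with back-of-the-envelope arithmetic. The timeout values below are illustrative assumptions, not measurements: NATs commonly hold idle TCP mappings for around two hours (RFC 5382 recommends at least 2h4m), while aggressively scavenged, UDP-style mappings survive only minutes.

```python
# Back-of-the-envelope sketch of the keepalive-cost argument above.
# Timeouts are illustrative assumptions, not measured values.
SECONDS_PER_DAY = 86_400

def keepalives_per_day(nat_idle_timeout_s: int) -> float:
    """Keepalives one endpoint sends per day to hold a NAT mapping
    open, refreshing at half the idle timeout to be safe."""
    refresh_interval = nat_idle_timeout_s * 0.5
    return SECONDS_PER_DAY / refresh_interval

tcp_like = keepalives_per_day(2 * 60 * 60)  # ~2 h TCP mapping timeout
udp_like = keepalives_per_day(12 * 60)      # ~12 min scavenged mapping
ratio = udp_like / tcp_like                 # tenfold increase
```

With these assumed numbers, an endpoint goes from 24 keepalives per day to 240, matching the rough tenfold increase the speaker warns about.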
F
Yeah, I agree it's something we have to carefully consider. I guess my sense is that, hopefully, these NAT boxes are using some sort of LRU policy for eviction anyway, so hopefully it would work out in practice, given the intended usage model. But, yeah, that's where we are at this point.
K
We have the reality today that one of the reasons QUIC is not suitable for long-lived, mostly idle connections, like push-notification connections, is because UDP mappings have to be aggressively recycled, because there's no explicit FIN or reset. And if we effectively move TCP in the direction that it also has no explicit connection termination...
A
Thank you for the discussion, and thank you for attending. I suggest moving the discussion to the mailing list. I would say thank you for attending the meeting, and Michael, Yoshi, do you want to say a final word, or...