From YouTube: IETF114-TCPM-20220729-1400
Description
TCPM meeting session at IETF114
2022/07/29 1400
https://datatracker.ietf.org/meeting/114/proceedings/
C: Okay, this is TCPM, so if you are not interested in TCP, you might have chosen the wrong room.

C: We have a Jabber scribe; Yoshi will watch the chat, and if there are any comments he will make sure that they come through. One piece of information: if you are writing a document and think TCPM is a venue which might be interested in it, then use "tcpm" in the draft name so that it's on our radar.

C: So, I have seen the presenters. I think Mahesh is still on the train, heading to his hotel, and once the presentation is up he should be in the hotel and online, so that should work.
C: These are the milestones of the working group. The TCP-AO test vectors document has been published since the last IETF; it's RFC 9235, and you might have seen a couple of emails regarding errata. There is one issue, which is that the TCP checksum is wrong in the example packets when using IPv4 as the network layer. This concerns the example packets, twelve or sixteen of them.

F: Yeah, I don't want to have an extended debate about this. You've already filed the errata, thank you for that, but I think the least aggregate work for everyone is if you just do the step of filing a single erratum for all of them that is general to the document.

F: Well, let me think. I guess we have to tell the RFC Editor; I'm sorry, I'm thinking about this in real time, but I guess we have to tell the RFC Editor to make all the changes. So actually, you know what: you file the sixteen, I'll just approve them all, and that way the RFC Editor has no ambiguity about what they're going to do. I changed my mind in the middle of my comment: let's just leave it like it is.
C: Okay, RFC 793bis, that's the TCP spec, the base spec; it's in AUTH48, and there have been a lot of comments from the RFC Editor which have been addressed recently by Wes, so the document is moving forward.

C: The TCP YANG document is, after working group last call, in IESG last call. A couple of comments have been raised and the authors are working on addressing them. We'll get a status update here, and we will also have a presentation about potential additional features which some users think would be very useful; I think they are maybe suggesting something like another, more detailed YANG module. The next two documents are PRR and CUBIC. For PRR we have a presentation; it's still in progress. For CUBIC we will also have a presentation: we are past working group last call but got substantial comments and are still in the phase of discussing them. We will get a status update on that during the discussion. Unfortunately, the person who raised the comments couldn't attend this meeting.

C: Accurate ECN is also on the list of drafts which will be presented; that's ongoing work, almost ready for working group last call, I would say. We have Generalized ECN, which builds on that, and TCP EDO, which has not had many changes since the last meeting.
F: Yeah, this is all fine, but, and I realize things have to be done when they're done, if it's at all possible to prioritize Accurate ECN: if you have two documents in front of you and one of them is Accurate ECN, please process Accurate ECN first, because the L4S stuff is already kind of going through, so I think it's a little awkward for this to be behind.

F: So the practical implication of this guidance would be that if, say, as a document shepherd you had two documents and one of them was Accurate ECN, you would work on the shepherd write-up for Accurate ECN first. This dilemma may not arise; you may just have one document in front of you, and then Accurate ECN is the only thing on your plate, and, you know, obviously Accurate ECN has to be done first, and it's not done.

C: I don't think anything is blocking Accurate ECN, so I think the thing we are working on is basically CUBIC, yeah.

G: Yeah, this is Bob Briscoe. Yes, that's true. I just wanted to ask whether we're going to get a working group last call, or an attempt at it, on Generalized ECN as well, because that's part of the same thing.

F: So rule number one is: it is ready when it is ready, and I'm not saying stop all work so we all just sit here and focus on ECN, but to the extent that there's a process step where you're doing one working group last call at a time, or one shepherd write-up at a time, or whatever.

C: If that's not the case, then I would suggest we move on to the first presentation.
C: We have 15 minutes spare in our agenda, and if there is fruitful discussion on any of the issues we have discussed, we are willing to run over the time limit in a certain way, while still trying to make sure that all presentations can happen. So then, I would say: Vidhi.

H: So there are some issues that were raised by Markku, and I've been exchanging emails with him regarding some of the issues that I couldn't really figure out from the mailing list; I'll talk about that in a little bit. Issue one is about the TCP-friendly model, and it seems like Markku said that the TCP-friendly model used by CUBIC is not correct; he pointed to a paper, I think written by Bob. So maybe we will take this after.
H: It is an interesting thing. I have spoken to many other folks who participate in TCPM, and it seems like the 0.7 versus 0.5 question is related to the doubling in slow start: when you double and then see a loss, a 0.7 reduction leaves you at roughly 1.4 times the congestion window that last fit, which still means the queues are going to be full, even if there won't be a second round of loss.

H: So it's not the best, but it still probably avoids a second round of loss.
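The arithmetic behind that point can be sketched as follows. This is my own toy illustration, not from the presentation; the window value is made up.

```python
# Toy illustration (not from the draft): why a 0.7 reduction after a
# slow-start overshoot can leave the queue full, while 0.5 undoes the
# final doubling. w_fit is a hypothetical window that the path can hold.

def window_after_loss(w_fit, beta):
    overshoot = 2 * w_fit      # slow start doubles past the window that fit
    return overshoot * beta    # multiplicative decrease on the loss

w = 100                               # assumed path capacity, in packets
print(window_after_loss(w, 0.7))      # 140.0: still 1.4x what fits
print(window_after_loss(w, 0.5))      # 100.0: back to the window that fit
```

Since 1.4 times 0.7 is 0.98, a second 0.7 reduction would bring the window just under the point that fit, which matches the "takes two rounds" remark made later in the discussion.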
H: Having said that, I have also spoken about this to a lot of folks, and to the authors, and my personal opinion is that changing this at this point is not the right thing to do, because there is just literally no deployment with 0.5.

H: Then there was issue number three, which has been resolved by Neal. This issue was basically just a bug: initially we were saying that if the congestion window is over W_max, then set alpha_cubic to one, and the bug was that it should rather say: if the congestion window is higher than the prior congestion window. So Neal has fixed the issue, thank you Neal for that, and I think he also said this issue was fixed in Linux.
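A sketch of the fix as I understand it from the discussion; the variable names here are illustrative, not the draft's exact pseudocode:

```python
# Sketch of the bug fix discussed above (illustrative names, not the
# draft's normative pseudocode). alpha scales the Reno-friendly additive
# increase; once the Reno-friendly estimate catches up, alpha reverts to 1.

def reno_friendly_alpha(w_est, cwnd_prior, alpha_cubic):
    # Buggy wording compared w_est against W_max.
    # Fixed wording compares against cwnd_prior, the congestion window
    # held just before the most recent congestion event.
    if w_est >= cwnd_prior:
        return 1.0
    return alpha_cubic
```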
H: Issue number six was about implementations that still use the congestion window directly during a congestion event to do the reduction, and there's a PR open for this. I think there is just a small edit left in this one, as Markku sent an email about it yesterday, but I think this is something we can resolve.

H: So, as I was saying, I did email him to ask what issues number four and five are, and he replied a day ago, so I have not covered that. But for issue number four, if I understand it correctly, he mentioned that 0.5 should also be used when we are in congestion avoidance, and not just after slow start.

H: Actually, there are no open issues on GitHub; there are open issues on the mailing list, but there are pull requests on GitHub. So if you are interested in reviewing them, please go ahead. We're trying to converge as soon as possible, so please help us review the pull requests and merge them into the draft.
H: There is support for publication as a Proposed Standard on the mailing list as well as in the last meeting. And for issue two, I just noted here: it can be considered for the future, but not now; that's what we think. That's all I have, but we can discuss.

I: So, Richard here, just an observation. I believe none of the current implementations of CUBIC are pure, just implementing CUBIC as it is in this draft. For slow start, they will typically always have some kind of HyStart or more advanced functionality there, with perhaps the notable exception of FreeBSD, but there CUBIC is not the default method. So maybe it would be worthwhile; I want to say that I agree with your statement here that it would not be advisable to change the recommended beta value.

I: At this stage, however, perhaps make a note in the document that a pure slow start, as in the past, is maybe not the best way to determine when to stop slow start when running CUBIC. Thank you.
G: If you are going to mention anything about 0.5 in the document, and I don't know whether that is needed given what Richard said, but if you are: I think there are also potential problems with 0.5, in that you then get more queue variation, so you get more delay variation. Obviously, as you say, that's something that has not been measured, and if it was, I think you'd find things were worse.

B: From the floor, this is my individual comment. Issue one is kind of an overshooting issue in slow start, but I think this is an issue for slow start, not specifically for CUBIC. If we use CUBIC it's more visible compared to Reno, but even with Reno we have overshooting issues. So I think the problem should be fixed in slow start, not in CUBIC; all we can do in CUBIC is just tuning, and at this point I'm not sure if we need to do that.
C: On the chat, Rodney is asking whether you can run HyStart++ with this CUBIC.

H: Yes, people are running that already. It's also mentioned in the draft.

J: Stuart Cheshire from Apple. My quick comment is: this is largely documenting what is incredibly prevalent on the internet today. If somebody wanted to propose a CUBIC version 2 with a different beta parameter, that's fine as a proposal that the IETF would discuss, but that is really not the purpose of this; this is reflecting the reality. As several people have said, the reality has had the benefit, good or bad, of lots and lots of testing.

J: This proposed alternative would take years of study to get to the same level of understanding about how it behaves at large scale.

B: Okay, any more comments? And then...
B: Yeah, issue one is about the model. Markku and I have been having a discussion: I have some comments, Markku has some comments, and we have exchanged a lot of comments with each other. Right now I have some counterarguments to his comments, but for each counterargument he will have a counterargument in turn. So the conclusion is that this is not an easy issue.

B: That's my conclusion at this point, and it means that in order to settle this issue we need more detailed analysis; otherwise we cannot settle the discussion. But I don't know if we should wait for the conclusion of a discussion that may take years of detailed analysis. If our purpose is publishing the CUBIC draft in 2025 as a perfect version of CUBIC, then we can wait, but I don't know if that is what we want.
B: Basically, if we want to make a CUBIC version 2, that's totally fine; we should do it, but not in this draft. I think that's the general consensus of the working group. We analyzed the consensus in the previous meeting, and the sense was that we are going to publish this draft as a Proposed Standard, and I don't see any big opposition that would change the current situation.

B: From my point of view, the one thing I want to emphasize is: the CUBIC draft is not a threat to the internet. If we deploy CUBIC, no congestion collapse happens; nobody says that. What we are arguing about is that if we compare Reno and CUBIC, then CUBIC might be aggressive: some people say it's not truly aggressive, some people say it's drastically aggressive, but this is under discussion.

B: We don't see any conclusion yet, and in order to reach a conclusion we need more analysis; that's the kind of situation we are in. So at this point: do people want to wait for the results of that detailed analysis before we publish, or do we publish what we have and then discuss a CUBIC version 2? Which do people want? That's the point of the discussion, in my understanding.
L: Yeah, I am going to say again what I said last time: I don't like this position; I think we should have handled this differently. This has been deployed; it's the one we're using. I think we should publish this as a PS, note down this important discussion we're having now as part of that PS, and keep progressing.

F: Well, all right. With issue one, I think the case was made that it was too complicated to discuss in person, and I've not read the email, so I don't know. For issue two, I think I'm going to concur with the other people: 0.7 is what we deployed, 0.5 is very researchy, and I think opening a research topic is not appropriate for this effort. Yeah.
C: So my position as an individual is: we should spend some time discussing the issues, because if it's a mistake in the specification, we can fix it, like the one Neal fixed.

C: If we have something like the 0.7 versus 0.5 question, I would be happy with documenting that people are now using 0.7 and giving a short description of why there was a discussion that 0.5 might be appropriate, or the better value, or whatever. But it is about documenting what's now being used; whether that is related to a mistake or not doesn't matter.

C: I believe we should be documenting in this draft what has been deployed, where we have a lot of experience already: we know that we haven't corrupted or collapsed the entire internet. If we go at this stage to 0.5 and hold this document for an extended period of time, then, quite frankly, nobody will actually change the beta parameter.
G: Okay, I just want to switch to issue one; I think issue one is done, from my point of view. All I will say is that the model I wrote up was for tail drop, well, for tail drop and AQM, and the AQM case is more difficult because it depends on which AQM it is. For the AQM model, the results we showed yesterday comparing Prague against CUBIC and Prague against Reno gave exactly the same results over a range of link rates and round-trip times.

G: So, you know, I can't believe that no one else has done results like that, but we have results as of now that show that Reno in its TCP-friendly mode with an AQM is identical to CUBIC, within the ability of the human eye to see differences in graphs.
H: So, yes, that's true, and Bob presented those results yesterday; if somebody wants to look at them, they are available in the ICCRG slides. About this 0.7 versus 0.5, I want to reply to Michael about whether it was a mistake. I don't think so; there are pros and cons. Probably in slow start it is a little worse: it maybe takes two rounds of reduction to basically get below the point where you avoid a second loss.

H: That's the con, and I think probably in slow start there is no pro, but I have to think that through thoroughly. In congestion avoidance, though, the queueing delay variation is lower when you use 0.7, because your reduction is smaller. That means you don't need buffers as deep as NewReno needs, and perhaps it also makes sense for slow start, because with a shallower buffer, slow start will also have low delay variation.
M: Yeah, hi, can you hear me? Yes? Okay, great. I just had a few comments to follow up on a few of the things said recently. I think there was a statement made that probably no one would change the beta value in practice. I would tend to disagree with that: I think there are definite issues with the 0.7 versus 0.5, and in the future I could see, for example, Linux deploying something different after appropriate research. But I agree with the consensus in the room that for this document we should document what's deployed already, rather than embarking on a research expedition before publishing.

M: In terms of Bob's remark about the recent tests showing Reno and CUBIC looking the same: if I remember correctly, those tests were Reno by itself and CUBIC by itself, but I don't think we would expect that to show issue one. I think issue one is about Reno and CUBIC interacting in the same queue, where the difference is basically that CUBIC increases its cwnd every other round-trip time and is therefore potentially less likely to see packet loss than Reno is when sharing the same queue. So I think we might need more testing, really, to understand issue one more deeply.

M: But again, I think that's something that should be put off to the future; we should just document what CUBIC does now. And then another question earlier was: do we...
C: My point was: please document what is out there, and in case there is a discussion about whether anything else is better or not, or this kind of stuff, we might want to document that, but we don't want to discuss this for years to figure out what is better. So it's about documenting what is out there.

G: This is a comeback on what Neal said; yeah, Bob Briscoe here. Neal: there were results of Prague versus CUBIC and Prague versus Reno in the same scenario, so you could compare how Reno fights a different congestion control in an AQM and how CUBIC does as well. There were also CUBIC with ECN versus CUBIC without, CUBIC versus Reno, and so on. So it wasn't just one flow on its own.
F: On this document maybe making some fixes and correcting so-called bugs relative to what is implemented: I think the right way to think about this is that we have a bunch of CUBIC implementers in the room, and if, in relatively short order, we are able to reach rough consensus that we really should do this with CUBIC, and that we will probably go back and do this in our implementations, then it would be appropriate to put that in the document, and maybe make a note that some implementations might not do it because the previous consensus was X, and that's fine. I'm not suggesting we start research projects in this area; that could be a different document. But if everyone says, "oh, this number was four, but it actually should be five, and that was just a dumb thing, it's a bug", and we all sort of agree with that, then we can absolutely make changes to what is deployed out there. I hope that's helpful; I think that's the right principle to apply to these sorts of tensions. Thanks.
B: Describing the issues is fine, but what I'm wondering is: if we describe the details of the issues, I'm afraid the document gets bigger and bigger and more complex.

B: So maybe I'm thinking, you know, we could write some simple informational document describing this discussion, and then motivate people to do some more experiments. That might, from my point of view, be preferable to describing this specific issue in detail.

C: Thank you; thanks for the discussion. Please continue discussing this on the mailing list, and let's get the issues resolved.

C: No one does this right now? Okay, thank you. Then I would suggest we move on to Yuchung regarding PRR.
K: Could you run the slides? Thank you.

K: Thanks. Okay, I'm here to present the second revision of RFC 6937. Next slide, please.

K: So, to remind people what RFC 6937 is: it's PRR, Proportional Rate Reduction for TCP congestion control, specifically during fast recovery. Basically, it decides what the cwnd should be, how much and how fast to send, during fast recovery. So it's a kind of mini congestion control. Well, "mini"; it can actually be used very, very often when a connection is experiencing frequent losses.

K: It was published in 2013 as Experimental and, at that time, implemented only by Linux, without RACK and TLP, so it used the previous RFC 3517 conservative SACK-based recovery of that time.
K: Over nine years we have done some large experiments through real deployments and revised the algorithm several times, and it's now the default in several stacks: Linux, the BSD Netflix RACK stack, and Windows. And I want to emphasize one thing: this sort of fast-recovery congestion control is actually the default no matter what congestion control module you use in Linux, except BBR; but BBR shares a lot of the same principles and algorithms as RFC 6937 as well.

K: So let's talk about the most important improvement to the original algorithm, which is for when the in-flight drops below ssthresh. For example, say your cwnd was 20 and you use Reno, so your ssthresh, what the Reno congestion control lowers to, is 10, but your in-flight has dropped to, say, three or two, below that 10. This is where the original algorithm asks you to hand-pick between two different algorithms.
K: One is more aggressive, the other is more conservative, and each comes with pros and cons. So let's look at that. The aggressive version is that during that time you slow start, since your cwnd is below ssthresh. The obvious upside is that it's a fast recovery, because it's slow-starting. The downside shows up if, during this lossy fast recovery, the buffer remains very full, so it's not just a single burst drop, or you are going through a policer which has run out of tokens.

K: The policer is then essentially dropping any excess rate that you send through it, and this could result in terrible losses, because you are literally pouring gas on the fire. The sender keeps trying to ramp up at twice the speed that the link is draining, and those packets just keep getting lost very easily.

K: If you don't have RACK, that results in repeated timeouts, because you run out of ACK clocking. On the other side, if you pick the conservative one, it's basically strict packet conservation.

K: When you get a packet SACKed, you send one more into the network. The obvious downside: recovery time is linear in packet losses and round trips for a large congestion window.
K: So the improvement really is to dynamically pick between these based on what the next ACK, the most recent ACK, indicates. By default we want to be conservative, but the last ACK may indicate that the repair is in good progress, meaning that SND.UNA is advancing: your last retransmit has been delivered successfully and, more importantly, since UNA is advancing, the receiver application is making progress as well and can receive more, and this ACK does not indicate further packet losses.

K: Another, smaller issue is that the original one doesn't really define the non-SACK case: what do you actually do? Here we apply a very simple technique that was actually implemented but just didn't get documented: if the ACK is a DUPACK (without SACK, all you get is DUPACKs), you simply assume that one packet has been acknowledged or delivered. Of course, this comes with the famous caveat that Neal found years ago: what if the receiver just ACKs every byte it receives? So we add some more protection, meaning we assume you cannot ACK more than whatever was in flight, as a sort of protection against this attack, to some degree. This accounting change allows non-SACK connections to also use PRR very easily, and doesn't require any extra state. Next slide, please.
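The per-ACK decision described above might be sketched like this. This is my simplified rendering of the PRR idea, loosely after RFC 6937, not the bis draft's exact pseudocode; the `safe_to_slow_start` flag stands in for the "last ACK shows good progress" test, and the DUPACK handling is the one-packet assumption just described.

```python
import math

# Simplified per-ACK PRR sketch (loosely after RFC 6937, not the exact
# bis pseudocode). Returns how many packets may be sent for this ACK,
# plus the updated prr_delivered counter.
def prr_sndcnt(delivered, pipe, ssthresh, recover_fs,
               prr_delivered, prr_out, safe_to_slow_start):
    # delivered: packets newly ACKed/SACKed by this ACK. Without SACK, a
    # DUPACK counts as 1, capped elsewhere by what was actually in flight.
    prr_delivered += delivered
    if pipe > ssthresh:
        # Proportional part: pace the reduction by ssthresh / recover_fs,
        # where recover_fs is the flight size when recovery started.
        sndcnt = math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out
    elif safe_to_slow_start:
        # In-flight fell below ssthresh and the last ACK shows the repair
        # is progressing: grow slow-start style, but never past ssthresh.
        sndcnt = min(ssthresh - pipe, prr_delivered - prr_out + delivered)
    else:
        # Conservative: strict packet conservation, one out per one ACKed.
        sndcnt = min(ssthresh - pipe, prr_delivered - prr_out)
    return max(0, sndcnt), prr_delivered
```

Under this framing, the cwnd definition mentioned later in the talk reduces to `cwnd = pipe + sndcnt`.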
K: Another minor issue: the original RFC does not try to force a fast retransmit upon entering recovery. In the original algorithm you have a sndcnt state variable that decides how many packets you are allowed to send for the ACK you just processed, and it can be zero, because you want to make sure the rate stays proportional to the new ssthresh, whatever the guiding congestion control says the new window should be. This has an obvious downside: you could potentially lose the ACK clock, because you don't know whether more ACKs are coming. What if only one packet survived the storm? So here we allow the fast retransmit, but only once during recovery, and that fast retransmit is also accounted among the packets sent out.

K: So the algorithm just gets a little tweak to force one packet out and keep the ACK clock going. This was originally implemented by Linux; it just didn't get documented in the RFC, so we put it in there. Another thing the original obviously didn't even define is how you calculate the cwnd. Here we define it as simply the in-flight, or pipe, plus the sndcnt state variable, so that with this cwnd calculation you would send exactly the same number of packets out; that makes it more clear. Next slide, please.

K: Other minor edits: we also recommend that you use RACK-TLP. Remember, we talked about wanting to make the algorithm more dynamic, asking: does this ACK indicate further losses? We say "recommend" because there could be other techniques to detect that; RACK-TLP is just one better detection algorithm. We also removed some deprecated text and the experiment section, since the experiments have now concluded, and updated the examples to reflect the new algorithm. We also noticed that Linux had a bug in the original implementation, reported by Bob (thank you, Bob), so we also sent a Linux patch to fix that.
B: So I think I sent some review comments on the mailing list a couple of months ago, and the new version seems to address some of my comments. But if you could respond to my review email, saying which points you updated and which points don't need updating and so on, that would be very helpful for me.

I: So this is Richard. I just wanted to say that I'm very happy that this now finally seems to be progressing again. I would want this to go through a working group last call rather quickly, having implemented the PRR from the old draft, and I would like to improve especially the heuristics; but I would also want to have the proper RFC by that time, so that it can be sent upstream. Thanks.

C: Okay, I think the next speaker is Bob.
G: A small three-bit field is essential, plus the supplementary option, and having this feedback gives you the fine-grained control that allows you to reduce delay a lot more. We're using it in L4S, but it's got other uses as well. Thank you. Next.
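For context, the three-bit field is the ACE counter, which counts CE-marked packets modulo 8; the data sender recovers the number of new marks by modular subtraction. A minimal sketch (my illustration, not the draft's normative algorithm):

```python
# Decoding a 3-bit wrapping counter such as the AccECN ACE field (my
# illustration, not the draft's normative algorithm). The field counts
# CE-marked packets modulo 8; modular subtraction recovers the increment.
def ace_delta(prev_ace, ace):
    # Ambiguous if 8 or more packets were newly marked between ACKs;
    # that is one reason the supplementary option exists.
    return (ace - prev_ace) % 8

print(ace_delta(5, 7))  # 2 new CE-marked packets
print(ace_delta(6, 1))  # 3: the counter wrapped past 7
```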
G: So, within the last cycle of the IETF there have been two updates; the links are there on the slide for the two diffs, and there is also an English summary of the diffs on the list, in response to requests from Gorry and from Ilpo, plus a follow-up to Ilpo's change. And finally, from some work that was happening earlier in the week at the interop, we found we hadn't properly documented the experimental IDs for the TCP options, and two different implementers had guessed two different numbers.

G: So you'll see we've sorted that. Next slide.
G: So, firstly, Gorry wasn't happy with the section on ACK filtering, which updated RFC 3449; that is, as the little asterisk at the bottom says, "TCP Performance Implications of Network Path Asymmetry", and it particularly updated the ACK-filtering part of that RFC. Gorry pointed out, and he was right, that that RFC referred to RFC 3168, and because Accurate ECN says it is going to update RFC 3168, that RFC will then apply to Accurate ECN as well. So we don't need to specifically update it; it just automatically happens by updating RFC 3168. So we changed things around, and we added a bit more technical detail on how a filtering node might handle Accurate ECN feedback if it was trying to improve performance, which is the whole point of ACK-filtering nodes. Okay, any questions on that? Move on.
G: So the second change: after Ilpo had implemented this, he said the implementation for sending the TCP option, the Accurate ECN TCP option, was much simpler than the receiving side of it. The previous recommendation in the draft was that you're recommended to at least do the receiving side, even if you don't do the sending side.

G: Obviously I recommend doing both, but for anyone trying to start out the advice was to do the receiving side first. We switched that to "do the sending side first", because that's the easier side, and it means that anyone who does do the receiving side, if they're getting this option arriving at them, can sort of unilaterally get it working by just implementing the receiver side. This means the receiving of the option, not the data receiver, because the option is feedback.

G: Okay, and this cycle, in draft 20, we just added a little more strength to the recommendation as to why it's important to implement this. You'll see the green text there; I won't read it out. Any comments on that would have been on the list.
G: If that does go through to mainline, then most Linux servers will be able to handle both sender and receiver, which is another argument that if you're a client receiver and you don't really want to bother implementing all of this, at least do the sending side; then the server will be getting feedback on the downstream, at least. Okay, next.

G: Right, this is the point about us omitting to register the experimental IDs that we've been using for this TCP option in implementations. We retrospectively registered them, or Richard did, earlier this week; as of Wednesday evening, I think, they're on the IANA registry, as shown there.

G: But what we want to do now is go for an early registration, no, early assignment, I think it's called, of the actual IDs that we want to use, in parallel with this going through the working group last call process and so on, so that the implementations can start using the real thing. Next is the next-steps slide. We had one early security area review that turned up some issues, but they were all resolved, and the author of that review has agreed it would be almost ready if he put that status in again. So I believe we're ready for working group last call. I don't think there's anything else I've seen on the list, anyone wanting anything done, and the authors are all happy that everything that should be done is done. So I don't know whether we're going to do that here now.

G: I just wanted to also add the other two points there: I don't know whether we, or the chairs, need a feel from the room on whether we should go for an early assignment as well, and, finally, whether we go for actually doing the working group last call for Generalized ECN; that's been ready for working group last call for some time, and it has a dependency on Accurate ECN.
F
Martin Duke, Google. Yeah, early allocation I think is a good idea, because if it turns out that whatever we allocate gets eaten in the internet, that would be good to know before we publish, before it goes to the RFC editor. And I don't know if there are magic unallocated options that would make it through the internet or not.
F
G
Well, the reason this issue came up is Neil was testing his Linux implementation for interop with Richard's FreeBSD implementation at the interop earlier in the week. So yes, Neil is the guy that, but...
F
G
Well, I mean, with Ilpo's original patches he produced a patch set for the netdev community, and basically it was all ready, there weren't any problems, but it was waiting for the IETF to do the RFC. And I mean, the Linux community tends to work on trust, you know, and Neil is one of the people whose name would stand behind that.
G
I think I'm not okay presuming that Neil will say it's okay or anything, but you know. Okay, Neil, but you...
C
Quick question: is the Linux community waiting for the RFC, or waiting for the option kind assignments?
G
You'd have to ask them, sorry. But when we originally put it in, it was waiting for the IETF approval. It wasn't clear whether, if it got through working group last call, or if it got approved by the IESG, whether it would actually have to, you know, be a published RFC. I don't know at what point they would be happy, but we can find that out.
K
Yeah, a couple of things. I think we definitely should get a real option ID as early as possible and not use the experimental ID in implementations.
K
K
G
My question, yeah, that's a good point, we can. I have to think how we would word that, because we have to be careful that we don't sort of endorse using DCTCP over the internet somehow. But well, we should be able to do that, yeah.
G
I
Just on the comment around generalized ECN for Data Center TCP: I have a private patch to do this, exactly because of the same reason, that an AQM that is compatible with L4S would mostly be compatible with Data Center TCP as well, and therefore generalized ECN and Data Center TCP would, in my opinion, be a natural fit.
I
G
Do you want to... no? Okay. So I just wanted to add that the DCTCP implementation in Linux already does what generalized ECN says: it just sets ECT on all packets. It doesn't not set it on SYNs and ACKs and things like that.
K
F
Is it the secretariat or IANA that you do early allocations with first, who's the first contact? Yeah, just email IANA and CC me, and things will move forward and we'll figure it out from there.
F
I
I have the answer to this, me being an Australian, so I started the process earlier on an informal...
I
Sorry, I'm Richard, I'm from Austria, so I've started this process on an informal basis. IANA is aware that this is going on, and quite frankly the process, as far as I understood it, is that the formal request has to come from the chairs of this group after the group has agreed.
F
B
Well, one question, Bob. So I think the draft is almost ready for working group last call from my point of view, but I sometimes, you know, exchange email with you and Gorry about some, you know, editorial things. I just would like to check that we've already settled those down and that you think you're ready for working group last call.
G
Sorry, did you say that, are you saying that there are some emails we've missed, or...
G
No, I think the two can go in parallel, because, you know, I don't think either needs to depend on the other.
K
Okay, could I release the mic button? Yeah. I just want to double check that Accurate ECN now has a working implementation that can show it works with GRO and TSO without any issues. Like, if we just run it now, say inside data centers, right, the TSO and GRO will all work fine, no?
K
E
G
It doesn't, I believe... this is Bob, sorry. I believe it doesn't, I believe it should work, but, you know, yes, obviously test it yourself is the answer to that question. But yeah, the code should work like that. Yep.
K
Okay, so that has been my biggest concern with Accurate ECN, yeah. But if that's clear, then I'm happy to support that.
C
L
M
Yeah, I just wanted to follow up on Yuchung's questions about offload support. I guess the one big question I remember was this question of whether various network devices, in their TSO segmentation offload facilities, might do the wrong thing with the ACE bits, since they don't know that they're trying to be used as ACE bits and might interpret them as other TCP flag bits. Is there anyone who can sort of quickly summarize where we are with this question of NIC compatibility with the ACE field?
G
I can summarize that: when TSO and GSO are done in the Linux stack, that's all handled. With hardware we haven't had those discussions yet.
G
We sort of need the community, like yourselves, so that we can start those discussions.
C
So the early assignment stuff we'll discuss between the chairs, and then send the appropriate mail.
C
So now it becomes interesting. This is Mahesh.
C
N
Okay, all right, sorry for that snafu. I'm in a public place, so do not mind the noise behind me. I'm here to give an update on the TCP YANG model.
N
A list of TCP connections, and a newly added list of listeners, was added in the recent version. The definition of that listener list was based on what exists in the TCP MIB.
N
The third item there is the modifications to support TCP-AO and MD5. Previously we were trying to implement it without augmenting the key chain model, and that didn't go down very well. So the recent changes now add the augmentation of the key chain model to support TCP-AO and MD5.
N
N
N
Now, if I can only move to the next slide... all right. So here's the overview of the IESG feedback we have got. As I said, three DISCUSSes and several comments. A lot of them have been addressed in a working copy; you can see the status on the right side.
N
N
The other was the new addition to support TCP-AO and MD5, and the corresponding examples that we have in the draft.
N
There were some inconsistencies in that example that we have now tried to address, and some inconsistency in the RFC 2119 language that we have also addressed. There was a suggestion to remove MD5 support, which the authors are not planning to do, because we do have a requirement from the BGP YANG model for support of MD5.
N
As far as the encoding of the listener list is concerned, it's currently encoded as a union, the definition of which you see below, and this definition comes from the TCP MIB. I think the question, or the DISCUSS comment, that we received was how the IPv4 address of all zeros, or the IPv6 address, is interpreted by this particular union. The YANG 1.1 language essentially says that if in a union you have two type definitions, you're supposed to parse the types in that particular order.
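The ordered-member rule described here can be illustrated with a small Python sketch (a hypothetical illustration, not the draft's actual code): YANG 1.1 resolves a union by trying each member type in the order declared, so an all-zeros IPv4 address such as `0.0.0.0` matches the first member and is never interpreted as an IPv6 address.

```python
import ipaddress

def parse_union_address(text):
    """Mimic YANG 1.1 union semantics: try member types in declared
    order and return the first one that accepts the value."""
    for member in (ipaddress.IPv4Address, ipaddress.IPv6Address):
        try:
            return member(text)
        except ipaddress.AddressValueError:
            continue
    raise ValueError(f"no union member matches: {text!r}")

# "0.0.0.0" satisfies the first (IPv4) member, so parsing stops there;
# "::" fails the IPv4 member and falls through to the IPv6 member.
print(type(parse_union_address("0.0.0.0")).__name__)  # IPv4Address
print(type(parse_union_address("::")).__name__)       # IPv6Address
```

Reordering the tuple would change which member an ambiguous value binds to, which is exactly why the declared order in the union matters.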
N
The final slide: as I mentioned, the AO configuration examples. There were a couple of suggestions on how to make the containers for AO and MD5 presence containers, which we have adopted in the draft. It's a minor change; it doesn't fundamentally change the model in any way.
N
What is new, and a required addition, is that TCP-AO supports AES-128; the example in the draft did not reflect the fact that the only crypto type that AO supports is AES-128.
N
C
I only have one question, which is: you said you changed the examples.
N
Yeah, so as part of the draft we do run the example against the model, and we use yanglint to do the verification that the example does correspond to the model, okay.
C
M
O
Hello, my name is Gyan Mishra, with Verizon, and I will be presenting the next-gen TCP YANG model, a discussion that came up through the ops area review recently. Next slide.
O
So here's some motivation and some history related to the next-gen TCP YANG model. During the ops area review of the TCP YANG model that was just discussed, as a result, we started looking at a possible next-gen TCP YANG model, and we would like to get feedback from the TCPM working group as well on thoughts regarding this process and whether it's something that's feasible. So as a result, we discussed the YANG model and what would actually go into it on the mailing list.
O
So the YANG here is about visibility, similar to the SNMP MIB, and not remote management. Just some discussion that we've had: the current YANG model kind of really mirrors the SNMP MIB, and from the routing area grouping, kind of what we're interested in related to the next-gen, sorry, TCP YANG model, it's not really necessarily remote management.
O
But what we would like to be able to do is observe the TCP parameters related to the TCP session state, with telemetry either back to a controller or over NETCONF/YANG, just being able to pull statistics.
O
We would like to be able to see everything that could be seen if you're looking at, like, a local OS hook into the kernel; just visibility into the TCP parameters related to the connection state. Next slide.
O
O
One of them is related to data centers, massively scalable data centers, which are BGP-only now with RFC 7938, and that's something that a lot of operators are looking towards. So really the visibility, and the need for stability with BGP, and being able to have monitoring capabilities related to BGP, is really important for operators. Next slide.
O
So this is just a use case that came up; it's come up a few times in the IDR working group, and it's related to internet outages that have occurred with the TCP window collapsing and going to a zero window, resulting in a stuck state. I just put in there the mailing list archives related to that discussion.
C
O
The second use case is related to compute nodes and a TCP session: just being able to monitor statistics related to the session state, and then windowing and window scaling, mostly for throughput and application traffic, I guess server-to-server or client-server response time. Next slide.
O
O
O
Next slide. And so I'd like to get feedback from the TCPM working group, just thoughts related to this. Thank you.
G
Michael, did you just say... Bob here. I think I read your lips, but you're muted, I think.
C
G
You read my lips correctly. Just to point out, all TCP options is probably excessive, because, I don't know how we'd make a list of those that are actually used, but there's a lot that aren't. So...
O
I understood. Maybe parsing through, because I don't really know how large the list is, but probably parsing the list and seeing what would be pertinent would, I guess, make sense. Because I'm sure if it's lengthy, it probably doesn't make sense to include every single one, but just finding the ones that are pertinent, that would make sense.
F
Martin Duke, Google. Can you go back to use case one? Sure.
F
So I'm a little confused by the example, so you have... keep going, Michael. Let's, it's just...
F
That one, okay. So in this case, like, from A to B both the send and receive windows are nonzero, so data can be sent, and then in B to A, it's like both sides are deadlocked, right? Right. So...
F
O
Yes, so what ended up happening at this stage is that router A, he's not able to, I guess he's the one that kind of has its management plane hung. Okay.
D
O
Since the management plane uses TCP, as well as BGP using TCP, he's not able to write to the, I believe it's the receive window, so he's not able to write to his buffer, and so he is not able to process anything. Even if he gets the message, what ends up happening is he just doesn't close the session. I think that's the thing: he's not aware of the other end.
O
So that's where, I think, in, you know, a client-server application you have a TCP zero window, which happens, and then the window opens up. In this particular case, because it's router to router, when that congested state happens, the management plane is just completely hung, and then BGP, because the management plane is hung and it's using TCP...
O
The state of the connection ends up remaining active, and unfortunately BGP doesn't reroute. Okay.
O
D
O
It's hard for the NOC to realize that; it sees routes and everything seems normal.
F
F
Or, because I'm interested, like, if you're just using Linux TCP, I don't know what you can install; I mean, I doubt they're going to implement the YANG, and so there's no market for this, if that's the case. Yeah, I mean, I'm a little, I'm a little like...
F
It seems like you're not tuning, since, as you stated, you were not tuning these parameters, right? It seems like your actual options are just, like, resetting connections, rebooting stuff, right? And I'm wondering if a much simpler "I am deadlocked" model is much simpler than trying to have dozens and dozens of parameters you're specifying and reporting, and having complex logic for what means what. Just having looked at this and thought about it for ten minutes, but thanks.
C
C
So if the application is stuck, from a TCP point of view that's fine. So normally what you do there is application-layer heartbeating, or application-layer test messages, and then you figure out that you can't talk to your peer application layer anymore. But on these points, I'm trying to, so I don't know much about YANG; that's why I want to figure out what you are looking for.
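The point here, that a wedged peer is detected above TCP, not by TCP itself, is usually implemented as a periodic probe with a response deadline. A minimal sketch (the `PING`/`PONG` message format and the timeout value are illustrative assumptions, not from any protocol discussed here):

```python
import socket

def peer_alive(sock, timeout=5.0):
    """Send an application-layer heartbeat and wait for the echo.
    A TCP connection can stay ESTABLISHED while the peer application is
    wedged; only an application-level reply proves the peer is actually
    processing data."""
    sock.settimeout(timeout)
    try:
        sock.sendall(b"PING\n")            # hypothetical probe message
        return sock.recv(16) == b"PONG\n"  # hypothetical expected reply
    except (TimeoutError, OSError):
        return False                       # no reply in time: peer is stuck
```

If `peer_alive` returns False while the transport connection still looks healthy, that is exactly the "application stuck, TCP fine" condition described above.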
C
D
C
O
You know, as far as the TCP parameters, I think someone else had asked that question, but I think really maybe the pertinent parameters; I don't think we'd need all of the parameters, but what's pertinent to the connection state, that actually would help us in determining whether there's a problem, I guess for, like, the NOC. If there's a problem, then maybe parsing through the parameters, and there may be some key parameters that would be helpful, I guess.
D
C
C
Q
Jeff Haas, Juniper Networks, BGP developer, YANG, you know, author. So you gave me a giant list to try to work through; I wish I could have hit them one at a time, sorry. So to Gyan's point, the stuff that's in here state-wise is all appropriate.
Q
Your point about us not trying to implement tcpdump on top of YANG: that does not make sense. So being able to look at the TCP flags, modeling all of those, that's not a big deal for the session. If you think about most TCP stacks, you're setting these things as socket options, and that's an appropriate thing to see as part of socket state, as an example. There may be appropriate things like, you know, if you've seen an unexpected TCP option on the session, you could also record that and basically say so.
Q
Q
Sometimes you have, you know, dropped packets at inopportune times, where each side sort of thinks it's trying to get out of the state and it just never gets there for some reason. Sometimes you have that due to specific types of network drops; sometimes authentication can cause that in certain circumstances. But in the vast majority of cases these things are either bugs or other unusual circumstances, and the whole issue is that the session is wedged.
Q
The client, you know, BGP is the example case, but this can happen for other things, is sort of stuck waiting to get out of the circumstance, and, you know, that's if you're not implementing a form of protocol keepalive or hold timer that expects to get out of the situation, because as far as each side can tell, you have, you know, data pending and you're waiting to actually move along.
Q
So the challenge comes, you know, when you get into these stuck situations: how do you troubleshoot them? If you're on the box, you're going to sit down, type netstat, and see what's going on. What's needed for operators that are trying to troubleshoot applications like BGP that are used in routers, to be able to troubleshoot the situation remotely, is being able to simply get the status of the session and see what's going on. And, you know, there may be opportunities for some level of telemetry.
Q
For these stuck situations, like zero-windowing, which is a common thing, you don't want to generate a YANG notification or trap out of this sort of thing, just like you wouldn't want to in SNMP. But it's very appropriate that, if you're monitoring these things and you see that a BGP session has sort of gotten stuck, you're able to query, you know, via NETCONF, what the status of the socket is. You know it's been zero-windowed, and if I see it's been that way for, you know, a minute, you can then take action.
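The polling logic just described, query the socket status and act only once a zero window has persisted past a threshold rather than trapping on every occurrence, could look something like this minimal sketch (the `peer_rcv_window` field name, the one-minute threshold, and the commented-out helper functions are assumptions for illustration, not from any YANG model):

```python
import time

def check_session(status, state, threshold=60.0, now=None):
    """Return True once the peer's advertised receive window has been
    zero for at least `threshold` seconds; `state` remembers when the
    window was first observed at zero across polls."""
    now = time.monotonic() if now is None else now
    if status["peer_rcv_window"] > 0:
        state.pop("zero_since", None)        # window reopened: clear timer
        return False
    state.setdefault("zero_since", now)      # first observation of zero
    return (now - state["zero_since"]) >= threshold

# NOC-style polling loop (get_socket_status / reset_session hypothetical):
state = {}
# if check_session(get_socket_status("bgp-peer-1"), state):
#     reset_session("bgp-peer-1")   # acted on only after a stuck minute
```

The transient-zero case returns False, which matches the point above: a momentary zero window is normal and should not trap, only a persistent one warrants action.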
Q
Like resetting the session, you know, or using your routing protocol. So there's a lot of options for things that you can do here, but most of this is the same type of visibility you'd get via CLI, just simply, you know, put into the management plane in a generic fashion. And there's a throwaway comment about, you know, modern stacks: we run BSD-derived stacks, we run Linux-derived stacks, and we layer stuff on top of all that for management. So this is not unusual.
E
N
Sorry, so I want to generally reflect, I think, the comments that both Jeff and Michael provided, both as author of the BGP YANG model and the TCP YANG model: I think trying to keep track of state information, or at least the live state information, probably doesn't make sense as a replacement for tcpdump, but definitely for conditions.
O
That would be perfect. I mean, that's really, as Jeff described, exactly what we're looking for: somehow we can get an alert when that stuck state happens, you know, when that zero window happens, and be able to get a report flagged, I guess, to the NOC, and then be able to act on it. As soon as the stuck state, like, if it's stuck for a period of time, then we can act on it and reset the session.
F
Martin Duke, Google, again. So, like, you know, obviously the BGP community uses YANG, and so, you know, we're doing YANG work for them now, and I think doing more work for them is fine. I just think...
F
I think the principle we're trying to apply to our YANG work is to not try to do all of TCP, because the MIB experience was a terrible one, and to be very, very deliberate in adding stuff to that. So, I mean, I don't fundamentally object to this work progressing, but I would like to, like...
F
I would like the proponents to think hard about what kind of information would be actionable, and whether you need really fine-grained stuff like window sizes, or whether, like, booleans could be used. Or, if there's a bunch of stuff where the answer is reboot the box, just have an "I need to be rebooted" indicator or whatever. I mean, you know, off the top of my head.
F
I'm probably not saying that right, but try to have a very small YANG model, if possible, that covers what you really need for these use cases and that relates to actionable stuff; that will just make this a much more practical thing to get through the process.
Q
Jeff Haas, following up. I don't disagree with you; you want to keep the model as small as reasonable. YANG has the property that it can be extended, even with proprietary extensions, in a very easy manner. Offering advice to this working group, having done YANG work for the IETF: at the moment, you have a choice in front of you.
E
O
E
C
C
P
Okay, okay. So hello everyone, my name is Carlos Gomez. I'm going to present the updated version of the draft entitled TCP ACK Rate Request, the TARR option. My co-author is Jon Crowcroft from the University of Cambridge.
P
So first of all, let's take a look at the motivation for this draft. Delayed ACKs are a widely used mechanism intended to reduce protocol overhead. However, they may also contribute to suboptimal performance in some cases, for example in so-called large congestion window scenarios, meaning a congestion window size much greater than the MSS, where saving up to one of every two ACKs may be insufficient.
P
For example, when there are performance limitations due to asymmetric path capacity, or when we want to further reduce the computational cost and network load. And then there are also so-called small congestion window scenarios, that is, a congestion window size down to the order of one MSS, for example in data centers, where the bandwidth-delay product can be of the order of one MSS. In this case the delayed ACK will incur a delay much greater than the RTT; and also when there are transactional data exchanges, or when the congestion window decreases.
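The two regimes can be made concrete with back-of-the-envelope arithmetic (the window size, RTT, and timer values below are illustrative assumptions, not figures from the draft): with a large congestion window, ACKing every second segment still produces many ACKs per RTT, while with a window of about one MSS the delayed-ACK timer, not the RTT, dominates the feedback delay.

```python
# Large-window case: a cwnd of ~3000 segments with classic delayed ACKs
# (one ACK per two segments) still yields a flood of ACKs each RTT.
cwnd_segments = 3000
acks_per_rtt = cwnd_segments // 2
print(acks_per_rtt)                 # 1500 ACKs per RTT despite delaying

# Small-window case: an assumed datacenter RTT of ~100 us against an
# assumed 40 ms delayed-ACK timer, so the timer dwarfs the RTT.
rtt = 100e-6                        # seconds
delack_timer = 40e-3                # seconds
print(delack_timer / rtt)           # the timer is ~400x the RTT
```

This is the motivation stated above: in the first case halving the ACKs is not enough of a saving, and in the second case the delayed ACK adds a delay far greater than the RTT.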
P
So since the last IETF we produced versions 04 and 05, which address comments received at the last IETF but also on the mailing list; and by the way, thanks a lot to everyone for the very useful feedback received.
P
P
You have the old format, which is from 03, and also the new format in 05. So there are several things to mention here. First is that in previous versions of the draft there was a feature called ignore order. However, it was not very clear whether this was actually useful; we received a significant amount of feedback in this regard, so we decided to remove it from the document.
P
As you may recall, the intended status for this document is Experimental, so we are following RFC 6994, which defines the shared use of experimental TCP options, and according to that RFC the kind field can take only two values, 253 or 254, so we have chosen the latter. And the last comment for this slide is that, well, it was mentioned on the mailing list that, with the old format having an odd length, in some cases implementers might want to add some padding to make the size even. So...
P
P
P
P
Now we state that the R field carries the binary encoding of the ACK rate. Then we also state that R equal to 0 is a special case where the sender requests an immediate ACK while not modifying the default or steady-state ACK rate. And with this encoding, the maximum value of R is now actually 2047, at least in version 05.
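As a sketch of the semantics just described (this is an illustration of the stated rules only, not the draft's wire format): R is an 11-bit value, so its maximum is 2047; R = 0 asks for an immediate ACK while leaving the steady-state ACK rate unchanged; any other R requests one ACK per R segments.

```python
def interpret_ack_rate_request(r):
    """Interpret a requested ACK rate R per the -05 semantics (sketch).
    Returns a human-readable description of the request."""
    if not 0 <= r <= 2047:          # 11-bit field -> maximum value 2047
        raise ValueError("R must fit in 11 bits")
    if r == 0:
        # Special case: request an immediate ACK without changing the
        # default or steady-state ACK rate.
        return "immediate ACK, keep current rate"
    return f"ACK every {r} segments"

print(interpret_ack_rate_request(0))     # immediate ACK, keep current rate
print(interpret_ack_rate_request(2047))  # ACK every 2047 segments
```

The 11-bit bound is what makes 2047 the maximum under discussion in the next slide.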
P
So there has actually already been discussion in the past regarding the maximum value for this parameter, the requested ACK rate R. So again, in 05 it is 2047. This perhaps is for discussion, but well, in the past there have been questions on why values greater than 63 would be needed, and there have already been some answers.
P
So I don't know if there are any comments on this maximum value of R.
G
L
L
P
Okay, so yeah, I understand. For example, some of the reasons expressed for larger maximum values of that order would be to reduce, for example, the amount of ACKs that would need to be processed by the network. But I also understand the concern about large values of R, so yeah, I guess I'm wondering how to achieve a suitable trade-off here.
G
Hi, this is Bob Briscoe. I've just discovered that a mail I thought I'd sent I never actually sent, so I'll say it now. Obviously I didn't click send and it's sitting in my drafts.
G
G
G
G
G
P
B
Bob, the thing is, and that's just personal opinion, if we want to save on the option size, even just one byte, maybe we can use the previous version; and if we need a big value, we can utilize an extension bit and then negotiate new formats, something like that. Then maybe we can address your concern.
P
Okay, I see that there's some pressure on one hand to maybe have an even length for the option, but then also we want to keep the format short, so yeah, let's see if we can find a suitable solution.
C
So that's why: are we really saving a byte if we have one option with an odd length?
C
P
C
I
So while we are generally doing padding, padding is something that we could really do away with. I would really more like to see an argument made that a larger field length or option length is really valuable, and, as Gorry has pointed out, even a value of 100 is already quite excessive, at least at this age. Having a transport protocol flying without feedback for a flight of thousands of packets, I can hardly imagine a transport protocol that would, you know, have decent properties with such rare feedback.
I
On the other hand, I mean, this is an advisory and not a mandatory option, so the receiver can use it, can delay, but it's not that it has to follow it. So that's the other aspect of why I don't think that we really need that lengthy an option. Thank you.
C
Regarding the option: the concatenation of all TCP options has to have a length divisible by four, so that's the padding I was referring to. By the way, I closed the queue, since we are at the end of the time, and...
A
J
A
We have the rule of thumb of having four ACKs per RTT, because that gives sufficiently rapid feedback to suit the network path. It's also worth remembering that if a congestion event occurs, that will result in immediate feedback, regardless of the value of the delayed ACK timer.
C
Carlos, do you want to show more slides, or are you basically done?
P
Yeah, so just mentioning that we have running code. So this is great news and, as Michael announced on the mailing list, he has been leading the development of a prototype implementation of the draft for FreeBSD. My understanding is that there are some features already supported, and it will be completed in the next few months.