From YouTube: IETF113-TCPM-20220323-1330
Description
TCPM meeting session at IETF113
2022/03/23 1330
https://datatracker.ietf.org/meeting/113/proceedings/
A
It's almost time, but let's see. Okay, the number of people is increasing.
A
Hey, hello, everyone. I hear some noise somewhere.
A
Okay, hello, everyone. I think I can hear you and you can hear me. So this is our TCPM working group meeting, the TCP Maintenance and Minor Extensions working group, and this working group has three co-chairs. My name is Yoshifumi Nishida, and we have Ian here and Michael here, everyone remote. Just in case: this session is being recorded, so it will be published eventually. Those here and remote, please keep that in mind, because it will be published.
A
Okay-
and
this
is,
I
use
a
node.
Well,
I
think
you
have
already
seen
this
several
times
already.
So
in
a
nutshell,
this
basically
describes
how
to
participate,
how
to
contribute
to
the
idf
and
which
point
you
should
be
available
and
if
you
have
any
concerns
about
participating
or
contributing
the
idea.
Please
read
this
slide
very
carefully
and
then,
if
you,
you
know,
you
can
find
the
same
content
on
the
idf
webpage.
You
can
search
and
you
can
find
it
very
easily.
A
And unless you are using the onsite Meetecho: if you are a remote participant, please make sure your audio and video are muted unless you are speaking, and use your headset as much as possible.
A
Moving beyond logistics: Richard will be in charge of note taking, thank you so much, Richard, and Michael will take care of the Jabber scribing.
A
And this is the agenda for today's meeting. We start with working group status, and after that we have three presentations on working group documents. First, Praveen will talk about the HyStart++ draft, then Vidhi will talk about the CUBIC draft, and after that Bob will talk about the AccECN updates. After the working group document presentations, we have three presentations on non-working-group documents.
A
As you might have already noticed, there are some personnel updates in the working group. Very unfortunately, Michael decided to step down as a chair, but we really, really appreciate his long-term contribution and dedication to the IETF and to this working group. I really appreciate it; I learned a lot from him. And we have a new co-chair, Ian. Thank you so much for taking this role. Do you have some words?
C
Thank you, yeah. I'm really excited. Obviously I'm extremely familiar with QUIC and, you know, with congestion control work like PRR and such, and I'm excited to get better visibility into the innovations in TCP and to help wherever I can to make sure that TCP is as awesome as QUIC.
A
This draft will also be discussed in the meeting, and we believe that, with all of these, the draft is getting close to being finalized; that's the expectation. Next, the generalized ECN draft.
A
This
draft
has
been
on
hold
for
a
while,
and
the
reason
why
it
has
been
on
hold
is
that
this
truck
depends
on
accurate
dcn,
but
we
have
very
long
discussion
on
accuracy
and
draft.
That's
why
you
know
this
general
edition.
Drought
has
been
suspended
for
a
long
time,
but
I
think
the
current
status
of
our
tradition
is,
you
know,
getting
mature,
it's
getting
close
to
be
finalized,
so
I
think
we
can
start
proceeding
generally
each
and
drop
very
soon.
That
is
expectation.
A
And for the EDO draft, Joe sometimes updates the draft and sometimes sends a message to the mailing list, but the problem with this draft is that we don't have much feedback on it, so it's very difficult for the chairs to decide how to proceed with it. I would like to talk about this draft a little bit more in my final slide, so I will come back to it later. The final one is the RFC 6937bis draft, which has been inactive for a while.
A
But
as
far
as
we
exchange
email
with
the
authors,
they
also
is
planning
to
publish
new
version
very
soon,
and
so
once
they
have
published
the
new
version,
we
can
think
about
how
to
proceed
the
draft.
So
that's
a
plan.
E
This is Richard, just a general observation. I don't think we need to hold generalized ECN on any of the other stuff; really, quite the opposite. I would really appreciate it if generalized ECN would proceed. Perhaps we can even start thinking about going to working group last call because, quite frankly, I don't see any technical discussion going on around it, while there is clear technical merit to be had from generalized ECN.
A
Okay, that's fine! Okay! Then, Richard, you can send an email to the mailing list and start the discussion, and then we can think about whether it's ready for working group last call or not.
F
Yeah, that would be fine. It has a normative reference to AccECN, so it could go through the IESG.
F
Waiting
director
eastern,
if
it
went
first,
but
that
might
be
better
to
get
it
out
of
the
way,
rather
than
really
admit.
F
For this, okay, I just said there's a normative reference from generalized ECN, so...
F
It would be fine, you know; at least it would be out of the working group. Okay.
A
So this is my final slide. I'd like to talk about the EDO draft a little bit more. As I mentioned, the problem with this draft is that we have very little feedback on it. So what I'm thinking right now is that we could assign some dedicated reviewers for this draft, and if we can get very detailed feedback on it from the reviewers, we will have a much better idea of how to proceed.
A
Okay, yeah. After some discussion among the chairs, we might contact some of you to ask for reviews; that's the plan. Also, if you have implementation experience with the EDO draft, or if you plan to implement it, please let us know: that kind of information would also be very useful for us in deciding how to proceed with this draft.
A
So we appreciate your cooperation. In the meantime, I would also like to think about how to proceed with SYN option extension. Joe has one proposal about SYN option space extension, and the problem with this draft, again, is that we have very little feedback on it. Joe is saying that he is thinking about submitting this draft on the independent stream, but I'm not sure whether that is a great idea.
A
One reason is that option extension is very fundamental for this protocol, so whether having this kind of proposal on the independent stream is a good idea or not is a real question. But we still don't have specific feedback on this point, so we don't know whether that is okay, or whether we should instead adopt it because it is a good idea. We would also like to think about how to extend the option space in general, not only these specific proposals.
I
Yeah, I would say that options extension in general, and in the SYN especially, is something I've been interested in for a long time. Ideally, I think the working group would handle this and arrive at a consensus solution. The independent stream path is a last resort, if the working group doesn't have the interest or energy to do anything.
J
Hi, Martin Duke, Google. Yeah, I agree with Wes. It would be nice to get this done, and if it's a question of bandwidth, that is, people want to review it but there are too many other things in TCPM right now, then maybe we can make a commitment to, I don't know, adopt it and then clear some space so that people have the time. Or, if people just aren't interested, that's maybe a different issue. So I'm not sure what the nature of the reluctance is.
E
So
this
is
so
this
is
richard,
so
I
so
from
my
ex
observations.
Over
the
last
couple
of
years,
we've
had
a
couple
of
years
ago.
We
had
this
push
to
get
more
option
space
and
it
was
quite
a
high
high
on
the
agenda.
But
since
then
we
didn't
really
find
anything
that
had
an
immediate
need
for
extended
options
right
away.
E
However,
I
would
support
what
martin
just
said.
I
think
we
should
discuss
adopting
this
and
having
it
on
the
on
the
working
groups
agenda,
because
eventually
it
will
be
coming
up
again
and
then
it
will
be
more
urgent
than
ever,
probably
at
least
in
the
beginning,
at
least
at
that
time
I
would
expect
people
will
have
some
bandwidth
to
contribute
and
review.
E
I
think
I
think
we
should
we
should
adopt.
I
think
we
should
adopt
this,
but
and
give
it
cons
consideration
just
as
much
imagination
and
to
see
that
we
can
work
on
this,
even
not
at
a
very
high
priority
right
now,
but
eventually
it
should
be
a
working
group
document.
J
So I do want to clarify that I was asking a question, not making a statement: whether the lack of reviews reflects indifference or bandwidth. If the question is bandwidth, then I think we can make time for this, maybe reduce our document intake or whatever, and put it as the last milestone, the furthest-out milestone. Whereas if people just don't care, then we shouldn't string Joe along. Thanks.
K
Okay, great. Hello, everyone. I'm here to talk about HyStart++, the current version, 04. This is a recap of the original idea from when we first brought it to the TCPM working group.
K
So
the
problem
we're
trying
to
solve
is
the
slow
start,
overshoot
problem
so
because
we
effectively
double
the
window
every
round
of
time
we
can
overshoot
the
ideal
send
rate
and
that
causes
massive
packet
losses,
which
means
you
know
more
time
spent
in
in
recovery
as
well
as
increased
packet
retransmissions,
and
sometimes
we
end
up
taking
rto,
even
with
algorithms,
like
rack,
so
hyzer,
plus
plus
originally,
when
we
proposed
it
was
a
very
simple
modification
and
the
idea
was
there
to
use
delay
inquiries
to
detect
that
the
the
queues
are
filling
up
and
we
basically
exit
slow
start
early
before
causing
the
packet
loss.
K
In case the delay increase was due to jitter or a temporary burst of packets, we introduced the use of Limited Slow Start: effectively, we would use the max of the Limited Slow Start and congestion avoidance windows, thereby mitigating some of the impact of a premature exit.
K
At
that
point,
we
got
some
feedback
and
we
started
looking
into
jitter
resiliency
and
we
ended
up
simplifying
the
algorithm
a
lot.
So,
instead
of
trying
to
compensate
for
early
exit
from
slow
start,
we
added
detection
of
spurious
exits
to
be
able
to
resume
slow
start.
So
imagine
a
network
where
there
is
a
lot
of
jitter
a
delay
increase
algorithm
would
kick
in
sometimes
spuriously
and
cause
us
to
exit
slow
start.
At
that
point
we
enter
something
called
as
the
conservative
slow
start
phase.
K
So
what
is
conservative
slow
start
is
basically
a
series
of
rounds
where
we
are
trying
to
determine
if
the
exit
was
spurious
due
to
a
delay
spike
and
the
way
we
do.
That
is
we
when
we
enter
css.
We
capture
the
the
current
rounds
main
rdt
at
that
point
and
then
for
a
series
of
rounds.
We
see
if
any
of
the
delay
samples
are
going
to
be
lower
than
the
captured
rdt.
K
That
tells
us
whether
the
delay
increase
was
spurious
or
not,
and
if
we
think
that
the,
if,
if
we
determine
that
the
exit
was
indeed
spurious,
we
basically
resume
high
start
plus
plus
so,
which
means
you
know,
regular
slow
start
resumes
as
well
as
the
direction
of
delay
increase
and
in
the
entire
algorithm
is
basically
reset
and,
and
it
restarts,
and
the
advantage
of
this
is
that
if
the
network
does
have
jitter
that
you
know
slow
start
will
still
give
you.
You
know
good
performance
and
an
exponential
window
increase.
K
So
that's
the
gist
of
it
that
you
know.
Instead
of
trying
to
compensate
for
early
exit,
we
are
detecting
spurious
exists
and
we
are
resuming
slow
start.
K
Sorry
for
the
small
font
size
here,
but
this
is
the
summary
of
the
current
algorithm
in
draft
version
4..
So
basically
in
slow
start,
we
we
adjust
the
condition
window
according
to
five
six,
eight
one,
but
we
take
rtt
samples.
So
each
round
is
approximately
an
rtt
and
we
remember
the
last
round's
minority
and
we
compute
the
current
round's
minority
and
we
use
a
threshold,
basically
there's
a
lower
clamp
and
an
upper
clamp.
K
But
basically,
if
the
current
rounds,
minority
is
greater
than
the
last
round's
minority
plus
the
threshold,
then
we
exit
slow
start.
We
also
capture
something
called
a
css
baseline
rtd.
So
that's
the
rgd
I
was
talking
about
that
will
help
us
determine
if
the
exit
was
spurious
or
not,
and
then
the
conservative
slow
start
phase
lasts
a
few
rounds.
Currently,
the
draft
recommends
five
rounds
based
on
experimentation
and
then
for
each
ack
in
css.
K
If a lower RTT sample is seen, the exit was actually spurious and we just resume slow start and HyStart++. But if we complete all of the CSS rounds and don't see the minimum RTT go lower than our captured RTT, then we enter congestion avoidance. That was then a correct exit, and there is a consistent delay increase.
K
We did make a few changes in the draft, and I want to bring up a couple of things that were raised on the mailing list. Randall had proposed that, instead of setting the CSS baseline RTT to the current round's minimum RTT, it instead be set to the last round's minimum RTT plus the RTT threshold. We did experiment with that idea, but it showed poor performance when jitter was present.
K
So I think it was an optimization for the no-jitter case, but in the presence of jitter it did not work well. And I think Neal had suggested that we use a different mechanism for determining the end of a round: basically, instead of using greater-than-or-equal-to, use strictly greater-than. I think his comments were aimed at cases where we could be app-limited, but we found that this also yields an inaccurate computation of the end of a round in many cases, so we did not make this change.
K
The only real logic change we made in draft 04 was removing the dependency on the low ssthresh variable. At this point, the only trigger for computing and measuring the RTT values and looking at delay increases is that enough RTT samples have been taken in the round, which currently...
K
At this point we have addressed all the outstanding reviews; I think we got one each from Bob, Jeremy, and Neal so far, and we answered questions on the mailing list as well, which came in from Randall and Neal. To our awareness, there are at least three implementations in production use. The Windows TCP CUBIC implementation has been using this for more than two years, maybe three or more at this point, and Cloudflare's QUIC library, quiche, uses it.
K
Freebsd
has
also
added
this
to
their
tcp
cubic
implementation,
so
in
the
author's
eyes,
I
think
the
draft
is
ready
for
working
group
last
call
and
I'm
happy
to
take
any
questions.
K
The latest versions of the Windows OS, both Windows 10 and Windows 11, have the latest algorithm implemented, and I think the FreeBSD TCP...
A
So
I
think
you
know
this
stuff
has
been
stable
for
a
long
time
and
then
I
think
you
know
we
are
almost
getting
ready.
Well,
I
can
rascal,
that's
what's
currently
child
thinking,
but
if
you
have
any
opinions
sold
on
this
one-
and
this
is
a
good
chance
for
you
to
speak
up
any
comments.
D
Sorry, I always forget the slide request. I think you...
D
Okay,
could
you
tell
me
where
was
it
on
the
left
or
the
right
has
to
okay
got
it?
Okay,.
D
Sure
all
right
can
you
see
the
slice
now
awesome
so
today,
I'll
talk
a
little
bit
about.
You
know
the
changes
that
have
been
made
and
then
the
discussion
that
was
happening
on
the
mailing
list
to
publish
cubic
base
as
proposed
standard
or
or
something
else.
D
So
I
have
captured
some
of
the
updates
in
these
slides,
but
there
are
many
more.
These
are
just
some
of
the
updates
that
I
thought
would
be
interesting
to
just
combine
in
the
slides,
and
this
includes
everything
that
has
been
changed,
plus
the
ones
that
I
have
not
added
here
so
they're
all
present
on
the
github.
D
I'm
not
gonna
go
through
each
one
of
them
because
we
have
gone
through
these
in
the
past
presentations,
so
I'll
give
just
a
couple
of
minutes
on
each
slides.
If
you
have
any
questions
just
in
case,
if
you
haven't
seen
these
updates
so
far,.
D
So,
since
the
last
meeting
we
made
some
changes
and
these
I
would
like
to
discuss
the
spurious
congestion,
detecting
spurious
congestion
event
and
reacting
to
them
there
there's
now
we
have
categorized
into
them
into
two
events.
One
is
boris
timeouts
and
the
other
is
spurious
loss
detected
by
acknowledgements.
The
first
one
is
kind
of
relevant
for
tcp
and
there's
already
a
standards
track,
rc
that
specifies
the
response
for
restoring
the
congestion
window
and
for
adapting
the
timer
so
that
there's
to
avoid
any
future
spurious
timeouts.
D
So
there's
this
rc
4015
that
specifies
all
of
that,
and
then
we
have
the
spurious
losses
detected
by
acknowledgments.
So
this
is
relevant
for
both
tcp
and
quick
and
in
tcp
you
can
use
timestamps
and
duplicate
sex
to
detect.
These
excuse
me
for
quick.
It's
quite
easy.
You
can
detect
it
by
seeing
any
new
packets.
Sorry,
sorry,
you
can
see
this
by
detecting
any
packets
that
were
marked
lost,
but
they
were
acknowledged
after
they
were
marked
lost,
so
that's
kind
of
a
spurious
loss
and
then
for
these
events.
D
Right
now
we
have
suggested
doing
a
undo
of
the
condition
window
of
the
slow
start
threshold
and
a
few
more
parameters
for
cubic
so
and
this
this
one
is
kind
of
a
has
a
precaution,
caution
written
in
the
text
to
be
careful
when
you're
using
this,
because
you
know
it
might
cause
you
to
set
condition
window
to
a
higher
value
if
the
link
capacity
has
changed
from
the
previous
state
to
the
now
state.
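A rough sketch of the undo idea, including the caveat just mentioned: restoring saved state can overshoot if the path's capacity has since dropped. This is not the draft's exact mechanism; the names and the beta value here are illustrative assumptions.

```python
# Sketch: save congestion state when a loss is declared; if the loss
# later proves spurious, restore (undo) cwnd and ssthresh.
# Caution (as in the draft discussion): restoring can set cwnd too
# high if the path's capacity changed since the state was saved.

class CubicUndo:
    def __init__(self, cwnd, ssthresh):
        self.cwnd = cwnd
        self.ssthresh = ssthresh
        self._saved = None

    def on_congestion_event(self, beta=0.7):
        # Save pre-reduction state, then apply multiplicative decrease
        # (beta=0.7 is CUBIC's usual factor; illustrative here).
        self._saved = (self.cwnd, self.ssthresh)
        self.ssthresh = self.cwnd * beta
        self.cwnd = self.ssthresh

    def on_spurious_loss_detected(self):
        # Undo: restore the pre-reduction state, if any.
        if self._saved is not None:
            self.cwnd, self.ssthresh = self._saved
            self._saved = None
```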
D
So
you
just
have
to
be
a
little
careful
with
that,
and
then
we
have
neil
actually
made
the
next
update
about
clarifying
the
meaning
of
application
limited.
There
was
some
question
about
what
is
application,
limited
means,
and
we
have
added
text
for
clarifying
that,
and
then
there
was
a
request
or
question
by
yushi
about.
Why
is
rfc
7661?
D
...it checks how many bytes are acknowledged in consecutive RTTs, over one, two, or three rounds. Because it is based on the pipe value, that is, on what is acknowledged in an RTT, it implicitly validates against the receive window, because you always check the receive window before sending data. So we have added text to make it a little more clear why it is safe to use even when the congestion window grows beyond the receive window.
D
So
these
are
the
latest
updates
in
the
draft-
and
this
is
probably
the
topic
for
main
topic
for
today.
D
P
Yeah
gory
first
I
was
going
to
talk
to
the
previous
slide
and
just
say
I'll
read
the
text
again.
I
think
I
was
happy
with
what
was
done
so
the
7661
thing.
I
think
they
did
the
right
thing
for
this
document,
so
I
don't
see
that
as
an
issue
and
I'll
read
the
text
carefully,
I
promise
and
give
feedback.
If
I
see
anything
so
I
from
my
point
of
view,
let's
look
at
the
last
slide.
Q
Michael, just a quick question about this last statement here. This separate draft on CUBIC evaluation improvements: is that supposed to be the home for the things that Markku has brought up?
L
Hi, yeah, Lars Eggert. I saw a bunch of emails from Markku on the list that I haven't fully had the time to read yet, but glancing at them, I'm not sure he is okay with publishing this as PS even if we did this other document; I would like to hear that validated from him.
L
Thing
means
that
that
the
ietf
cannot
do
this
before
cubic
undergoes.
All
of
this
you
know
evaluation
and
then,
while
I
sort
of
I
was
instrumental
in
establishing
this
process,
and
so
I
wish
that
deployment
would
have
happened
that
way,
it
didn't
right,
people
deployed
cubic
and
it
worked
well
enough
and
more
people
deployed
cubic,
and
now
it's
basically
ubiquitous
and
I
thought
of
c2
two
ways
forward
right
either
we
can
acknowledge
that
and
say.
L
Well,
you
know
this
didn't
happen
in
the
way
we
ideally
would
have
envisioned
it
to
play
out,
but
the
reality
is
that
it
is
running
the
traffic
on
the
internet
and
therefore
maybe
it
should
be
ps
or
we
can
say
you
know
we're
not
willing
to
do
that.
We're
going
to
stick
with
new
reno,
and
I
personally
obviously
I'm
very
biased
here,
because
I
think
this
should
be
a
published
center.
L
But
I
think
it
would
be
good
to
sort
of
that
seems
to
be
the
fundamental
sort
of
objection
right
that
it's
it
didn't
do
what
we
expect,
specifically
new
congestion
control,
algorithms,
which
is
arguably
isn't
really
either,
but
the
process
that
we
expect
the
stuff
to
undergo,
and
it
didn't
follow
that
right.
I
will
point
out.
P
This is not totally consistent with other RFCs, but it is running code. It is out there. It is the consensus of what to do, and that's what this document is. If we have a process for doing that, then I think we should go that way. If we don't, then I think we need to invent one, because we need to get this document published.
P
So
that's
my
own
take
and
that's
just
an
individual
idea.
The
second
thing
is,
I
don't
agree
with
lars
on
the
on
reno,
so
we
can
fight
that
one
out
later.
G
Michael,
so
lars
brought
up
a
statement
that
there
is
a
process
how
to
deal
with
new
congestion
controls
and
so
obviously
cubic
didn't,
follow
that
procedure.
G
Considering
another
congestion,
control
algorithm,
which
is
being
deployed
called
bvr,
is
also
not
following
this
procedure,
so
I'm
I'm
wondering
whether
this
how
how
important
should
it
be
to
follow
this
procedure
and
just
to
answer
martin's
questions
on
the
chat
netflix
uses
neorino.
L
Into
the
queue-
and
I
saw
there's
other
yellow
things
in
front
of
me-
so
I
mean
the
history
of
this
process
right.
This
is
when
I
don't
want
to
call
it
the
high-speed
congestion
control
wars,
but
it
was
basically
we
had
lots.
L
I
think
four
or
five
different
proposals,
many
of
them
from
academia
on
you
know
how
you
can
improve
tcp,
specifically
for
what's
called
high
speed
links
back
then,
which
are
just
normal
links
now,
but
back
then
they
were
high
speed
because
they
were
like
a
gig
fast
or
something
like
that,
and
we
felt
that
on
the
itf
side,
we
needed
to
give
this
some
guidance
and
some
structure,
because
it
was
sort
of
quite
confusing
how
people
went
about
sort
of
motivating
why
their
scheme
was
better
than
somebody
else's
scheme,
and
all
of
that
and
that
sort
of
that's
not
how
congestion
control
is
being
done
or
brought
to
the
idf
anymore.
L
But
we're
now
in
a
situation
where
the
people
who
work
on
new
congestion
control
schemes
actually
are
in
control
of
of
end
points
directly,
which
wasn't
the
case
back
then,
which
is
mostly
academic
work
that
smart
people
did,
but
they
didn't
have
the
the
ability
to
deploy
at
scale
which
we
now
have
right,
and
so
it's
a
very
different
world
now,
15
years
later,
where
we
actually
have
the
ability
to
do
a
b
testing
over
the
real
internet
and
and
get
data
back.
And
all
of
that.
L
A
A
This
is
proposed
standard
and
then
I
also
think
this
is
some
kind
of
aggressiveness
in
the
draft
the
algorithm
described
in
the
draft.
So
I
think
I'm
not.
I
don't
find
a
particular
reason
that
we
are
attacking
cubic
specifically,
so
there
are
several
drafts
and
the
old
draft
should
be
treated
equally
and
then
otherwise.
You
know
it's
not
very
fair,
and
so
I
just
know
if
you
know
we
in
the
space
we
set
very
high,
set
high
bar
on
the
cubic
drug,
I
think
there
we
should
set
all
other
kind
of
transition.
K
Yeah, so I tend to agree that this should be a Proposed Standard, just because of how widely it is deployed.
K
If I were writing a new TCP implementation, or even a QUIC implementation, having CUBIC as the standard to implement would make sense, because we know the limitations of NewReno. So it doesn't make sense for new implementers not to find this as a Proposed Standard RFC. That's my take on it as an implementer.
K
Right,
I
agree
about
the
part
about
the
process.
I
think
we
should.
We
should
revisit
that
process.
I
think.
J
Just very quickly: I feel like we're getting away from the 8312bis discussion a bit, but regarding the gates for congestion control, the bar that we are setting: it feels like that bar is now essentially high enough that the correct way to deploy new congestion control is to go work for one of those companies that Lars mentioned, deploy at scale, and then come to the IETF and say, "see, it works," which maybe is not...
J
It
is
kind
of
defeating
the
purpose
of
having
this
body
or
a
related
idf
body
like
provide
review
before
things
get
deployed
at
scale.
So
I
think
we
need
to
rethink
it
basically
without
without
making
specific
proposals
thanks.
H
In my opinion, I think the general sense is that going to Proposed Standard is justified.
A
Okay,
so
this
is
not,
how
can
I
say
finalize
no
number,
but
you
know
to
think
about.
You
know
how
to
proceed
this
route.
I
just
would
like
to
come
the
number
so.
L
Back out of the queue. Yeah, before that, I just wanted to ask, for the authors' team, since we have had the second working group call now: if people are okay with publishing the version as is, that would be useful to know; and if the answer is no, it would be very good to know what exactly should be added to the document.
L
I
know
that
gory
suggested
earlier
that
we
add
a
paragraph
that
said
this
document
didn't
follow
the
process
outlet
in
whatever
the
rfc
was,
but
the
you
know
the
itf
discussed
this
and-
and
I
want
to
say
proof,
but
at
least
acknowledge
that
it's
not
standing
in
the
way
of
this
publication
or
something
like
that.
If
there's
other
things
that
people
want
to
add,
that
would
be
also
good
to
know.
L
I
would
like
to
ask
the
chairs,
for
these
suggested,
edits
to
a
bit
more
actively
asked
whether
there's
consensus
for
this
edition.
Otherwise
we
are
sort
of
again
in
a
situation
where
this
grab
back
of
things
that
people
want
to
add.
We
don't
really
know
what
has
consensus
and
what
hasn't.
Thank
you,
okay,.
J
If
I
recall
the
discussion,
if,
unless
I
missed
something
in
a
discussion,
the
only
person
who
came
to
the
mic
and
said
it
should
be
experimental
was
lars
attempting
to
be
a
sock
puppet
for
for
makuu,
like
you're
kind
of
trying
to
like
express
his
opinion
without
him
being
here.
So
if,
if
either
of
the
one
or
two
people
who
support
experimental
rather
than
proposed
standard,
could
actually
like
say
something
about
it.
I
think
that's
sort
of
customary
at
least
to
give
them
an
opportunity
to
state
their
position.
D
Yeah, so I just want to reply to Michael that we have been making changes based on Markku's comments and writing those in. As for open issues, whether to write them up and put them in the draft for the reader to be aware of: at this point it isn't exactly clear what. If there is suggested text that Markku or anyone else would like to add, we would be really happy to add it, but that would require some participation and some commitment to get it done. And so, you know, there are lots of emails that Markku has sent...
D
But
what
we
have
been
asking
again
and
again
is
if
there
is
something
you
have
a
specific
suggestion
that
we
could
add
to
the
draft.
That
would
be
great.
So
just
you
know,
it
would
be
good
to
hear
that.
Q
Okay,
so
I
I
did
not
want
to
make
this
complicated
and
I
haven't
followed
the
last
emails
from
marco
anymore,
but
I
remember
that
there
were
things
about
having
to
update,
probably
3168
about
something
and
and
some
procedural
things
right
that
that
he
had
heavy
disagreement
about
now.
I
don't
remember
if
that
was
all
resolved
or
not,
and
I
mean
okay,
I'm
fine
with
publishing
without
having
that
thing
in.
I
don't
want
to
make
it
complicated.
Q
You
really
don't,
but
if
it's
possible
to
state
why
there
is
a
procedural
problem
here
and
why
that
thing
gets
published.
Nevertheless,
other
than
just
saying:
okay,
it's
published
because
there
is
code
and
is
deployed,
I
mean
if
we
could
be
a
bit
more
elaborate
than
just
something
something
saying
something's
fishy
about
this
rfc,
then
I
think
this
would
be
valuable.
If
we
cannot
I'm
not
going
to
make
a
point
about
that,
I
mean
it's
okay,
I
still
want.
I
think
the
primary
thing
is
that
this
should
be
a
bs.
F
Sorry, it's saying "do you really want to share your screen?", I've said no, and it's just hanging. Okay. This is going to be a talk about AccECN feedback; it's draft 18.
F
Next, okay. So, recent draft history: there was a long hiatus after July '21, but then, like all good buses, they come in threes, and so around February and March we've had three. There are links there to the summaries of the diffs in each of them, which I'm not going to go through in these slides, but there are spare slides at the end that do. And thanks to Ilpo, Vidhi, Gorry, Richard, and, actually, myself for noticing some unclear parts.
F
But
I
will
go
through
most
of
those
contributions
and
also
just
to
mention
that
there
has
been
some
conversation
between
quick
authors
and
the
accuracy
and
authors
about
how
to
deal
with
act
frequency.
F
Just briefly, news on implementation: there are two main implementations, one in Linux and one in FreeBSD. The Linux one is reasonably up to date; it's a couple of draft versions out of date, but there aren't that many things to do. Richard has been implementing it in FreeBSD; there's a link to his implementation there.
F
So, this is about the main activity that's been going on recently on the list, based on review comments from Vidhi, asking how you deal with any congestion feedback coming in when you're not sending ECN-capable packets.
F
So
there
are
a
number
of
cases
where
you
might
not
send
incapable
packets
because
there's
a
handshake
at
the
start
that
checks
whether
there's
a
mangling
going
on.
If
there
is
mangling
you,
you
still
remain
in
actual
etn
mode,
but
you
don't
set
ec
and
not
acn
capable
on
your
packets
that
you
send
out.
But
then
video
wondered
well,
even
if
you're
not
sending
ecn
capable
packets.
What,
if
you
get
congestion,
marking
feedback
coming
back
and
there
are
four
possible
three
possible
ways.
F
The
in
all
cases,
you
would
turn
off
sending
ecn
capable
packets,
but
it
there's
a
difference
in
how
we
think
you
should
respond
to
congestion
if
it
comes
in,
as
shown
in
the
right
hand,
column
and
we'd
like
some
discussion,
either
now
or
on
the
list
of
whether
these
the
rationales
for
these
make
sense.
So
the
first
one
is
if,
during
the
handshake,
you
see
some
sort
of
illegal
transition
going
on
like
either
you
send
a
non-easy
packet
and
you
get
get
back
feedback.
F
...then you might think it's not going to need any congestion response, but it might, if the mangling is turning on ECN capability and then, later on, there is some congestion that is doing normal congestion marking. So the idea there is that you respond to congestion...
F
If
you
see
any
feedback,
even
though
you
wouldn't
expect
to
see
it
right
in
the
in
the
b
case,
if
you
see
continuous
congestion
feedback,
then
you
don't
respond
to
it,
especially
if
you're,
if
you're
not
sending
ect,
but
even
if
you
are,
that
probably
implies
there's
mangling
going
on.
That's
that's
asserting
congestion
experienced
all
the
time
and
the
third
case
c.
F
If
you
get
a
zeroed
ace
field,
which
is
this
three
bit
field,
you
shouldn't
see
that
at
the
start,
so
that
probably
means
a
broken
receiver
or
something's
wrong
and
therefore
it's
probably
best
not
to
respond
to
congestion
and
just
if
you
know
just
solely
rely
on
loss,
so
we've
written
that
all
up
in
the
drafts
we
went
through
a
stage
of
having
must
and
must
knots
for
all
those
things,
and
then
it
was
suggested
that
we
should
make
them
all
advisory
rather
than
normative,
because
with
deployment
experience
there
may
be
unexpected
things
going
on
that
we
need
to
deal
with.
F
So we didn't want to tie things down too much. Okay, that was quite a complex slide, but please respond on the list about any of that. Next slide, please.
F
So in the process of these latest edits, I did a review of all the lowercase musts, shoulds, mays and recommendeds, and changed them all to "needs to" or "ought to" or "might", so that they had no potential of being read ambiguously, as if they were meant to be in capitals.
F
There were just two cases where I changed lowercase "recommended" to uppercase, and I think they were justified. One already said in the introduction that it's recommended to implement SACK and ECN++ together with Accurate ECN, and I made that uppercase; similarly, there was one saying it's strongly recommended to test the path traversal of the AccECN option, and given it already said "strongly", I figured it was quite reasonable to put normative language there. But if anyone wants to object to that, please do so on the list or now at the mic.
F
There
are,
as
I
said,
spare
slides
with
all
the
detailed
changes
and
I've
also
posted
each
one
to
the
list.
There
are
only
really
three
technical
changes.
All
the
other
changes
are
just
trying
to
clarify
normative
text.
Usually
the
three
technical
changes
are
all
that
stuff.
F
I just mentioned: the congestion response when not sending ECT. Then there's a change where we've allowed seven packets to go by before you have to send some feedback, rather than six, because the wrap is at eight, and we couldn't really think why we'd said six, so we changed it to seven.
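The wrap arithmetic behind that choice can be illustrated as follows: the ACE field counts modulo 8, so the delta in CE marks can only be decoded unambiguously if feedback arrives before eight or more marked packets have gone by. A small illustrative sketch (not draft text):

```python
ACE_WRAP = 8  # the ACE field is 3 bits, so it counts modulo 8

def ce_delta(prev_ace: int, new_ace: int) -> int:
    """Smallest non-negative number of newly CE-marked packets consistent
    with two successive ACE field values."""
    return (new_ace - prev_ace) % ACE_WRAP

# With at most 7 packets between feedback messages, the decoded delta is
# unambiguous; once 8 or more go by, the counter may have wrapped and the
# true delta becomes indistinguishable from (true delta - 8).
assert ce_delta(5, 4) == 7   # 7 new CE marks still decodes correctly
assert ce_delta(5, 5) == 0   # ...but 8 new marks would also decode as 0
```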
F
We'd just been conservative. And the third change is a bit of fallout from when we changed the Accurate ECN TCP option, so that we had two different TCP option orderings. If you remember, we went through that episode of changing it, and it was noticed that we hadn't changed the initialization of the two fields from how it was originally, when we only had one type of option, so we'll change that. Okay, next slide.
F
A couple of upcoming changes on this slide. The first one: Gorry has very recently been talking on the list about how he's not happy with this Accurate ECN draft updating RFC 3449, which is about network path asymmetry. Originally I thought it should update it, because RFC 3449 said that ACK-filtering middleboxes ought to make sure that RFC 3168 feedback works, and so I thought, well, as we're changing 3168 feedback,
F
We
ought
to
update
that
advice
as
well,
but
there
is
an
argument
that
that
advice
in
3449
isn't
normative,
so
maybe
we're
not
updating
it.
So
I
don't
mind
going
with
what
gory
says,
but
maybe
other
people
want
to
chip
in
on
that
point,
but
whatever
we're
going
to
try
and
change
the
text
to
outline
the
problem
and
discuss
possible
ways
forward
without
recommending
any
changes
and
without
updating
3-4-9.
F
Through the experience of Richard's and Ilpo's implementation, we found that actually sending the AccECN option is the easy part of the code, and handling the receipt of it is the difficult part. So whereas it currently says "if you don't want to implement the Accurate ECN option fully, that is, if you don't want to send it, at least consider receiving it", we want to change that round to:
F
"if you don't want to implement receiving it, at least consider sending it". That means that if the other end implements it, at least it will work, even if you're ignoring what they're sending, so it gives a little immediate gratification.
F
Other than dealing with the upcoming changes, one of which is still working through that filtering section with Gorry and then writing text for that: please could the people who asked for the recent changes confirm on the list whether they're okay or not. Otherwise, I think we're ready for working group last call, and, as I mentioned earlier, generalized ECN depends on this one.
H
Okay, two quick things. Previous slide, number nine: this proposed change would be the opposite of the interoperability principle, i.e. be conservative in what you emit and generous in what you accept.
H
The other thing I would note, and I've put this on the list in the past couple of hours, is that I think the ACK rate request, which I think will be talked about shortly, offers an alternative way of getting generalized ECN done, so we'll talk about that in more detail elsewhere.
F
Okay, can I respond to both of those? The first one: I don't think this is about being liberal in what you receive or send. In these two cases, in either case, one end is not doing something and the other end is doing something.
F
So I definitely don't think this contravenes that principle, but if you've said this on the list, I can respond there once I've fully understood what you mean. And on the other case: I also don't see how the ACK rate request makes generalized ECN work, or realizes it. It seems that generalized ECN is a different thing: it's trying to make control packets ECN-capable, which I don't see relating directly to the ACK rate request, but...
P
Okay, thanks for the RFC 3449 proposal on the list; I'm very happy with the direction that's going, and I'm sure the rest of the document is pretty much ready. The one thing I objected to in reading, which you did not pick up on, was the "strongly recommend": I'm not sure our AD would be that happy having a "strongly RECOMMENDED".
A
Okay, so it'll still need more updates, I think. Okay, yes, let's continue the discussion a little bit more, and then we can think about whether it's ready for working group last call or not.
B
A
M
Maybe you can control them, Yoshi, or perhaps I can also share my screen, whatever you prefer.
C
M
Okay, thank you. My name is Carlos Gomez, and I'm going to present the latest update of the draft entitled "TCP ACK Rate Request (TARR) Option". My co-author is Jon Crowcroft from the University of Cambridge. First of all, let's take a look at the motivation for this draft. Delayed ACKs are a widely used mechanism intended to reduce protocol overhead.
M
However,
in
some
cases
it
may
also
contribute
to
suboptimal
performance.
We
can
identify
two
types
of
scenarios
here.
The
first
is
so-called
large
congestion
window
scenarios.
That
means
when
the
congestion
window
size
is
much
greater
than
the
mss,
where
saving
up
to
one
of
every
two
acts
may
be
insufficient.
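As a rough illustration of why a 1-in-2 ACK ratio may be insufficient at large windows, consider the ACK count per round trip (the numbers here are made up purely for illustration):

```python
def acks_per_rtt(cwnd_segments: int, ack_ratio: int) -> int:
    """ACKs the receiver generates per RTT when it sends one ACK
    per ack_ratio received data segments."""
    return cwnd_segments // ack_ratio

# With a large window, even delayed ACKs (1 in 2) still produce a lot of
# ACK traffic and per-RTT processing...
assert acks_per_rtt(10_000, 2) == 5_000
# ...while a requested ACK rate of, say, R = 50 cuts that substantially.
assert acks_per_rtt(10_000, 50) == 200
```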
M
Then there are also so-called small congestion window scenarios, that is, when the congestion window size is on the order of around one MSS. For example, this happens in data centers, where the bandwidth-delay product is on the order of one MSS; in this case the delayed ACK will incur a delay much greater than the RTT. Also, in transactional data exchanges, or when the congestion window has decreased, the ability to request immediate ACKs may help avoid idle times and allow faster congestion window growth. So, about the status of the draft:
M
So,
let's
go
through
the
updates
in
zero.
Three,
the
first
one
is
in
the
main
format
of
the
option.
Previously
we
had
a
six
byte
format,
and
now
it's
just
five
bytes.
Basically,
we
have
remove
the
field
called
n
here.
This
was
intended
so
that
when
r
was
set
to
zero,
the
sender
could
request
immediate
acts
for
the
next
n
segments.
However,
it
was
found
that
this
was
mostly
redundant,
so
it's
possible
to
produce
the
same
effect
without
an
explicit
field
called
n
and
then
also
in
the
new
format.
M
The
size
of
the
r
field,
which
corresponds
to
the
ack
raid
requested,
has
a
size
of
six
bits
and
also
there's
one
new
bit,
which
would
be
at
the
moment
reserved.
This
is
called
v
so
for
the
r
field
there
there
are
like
two
possible
encodings
on
the
table.
The
first
one
would
be
like
the
straightforward
approach,
which
is
the
binary
encoding
of
the
requested
ack
rate
by
the
maximum
value
of
r.
Since
we
have
64.
M
Well, I don't know if there are comments on this, or perhaps we can discuss at the end. By the way, since I'm sharing my screen, I cannot see the queue at all. Okay, so then we also have updates in Section 3, which describes the behavior of the sender and the receiver.
M
First,
we
state
that
a
direction
capable
receiving
tcp
should
modify
its
aggregate
to
one
and
every
r
receive
data
segments
from
the
sender,
and
this
used
to
be
a
must
in
previous
versions.
However,
it's
been
modified
to
shoot,
because
actually
there
may
be
several
reasons
why
a
receiver
might
not
be
able
to
satisfy
the
request.
M
So
I
don't
know
if
there
may
be
comments
on
this.
Also,
there
was
another
suggestion
to
aggregate
this
option
with
others
as
in
yoshi's
draft,
and
another
clarification
is
that
a
tcp
segment
carrying
retransmitted
data
is
not
required
to
include
at
our
options.
So
if
the
original
segment
carry
the
tar
option,
the
retransmission
is
not
required
to
to
also
carry
the
same
star
option,
and
then
there
have
been
several
comments
about
the
ignore
order
feature
here.
This
was
suggested
once
in
a
previous
tcpm
meeting.
M
The
idea
behind
the
future
would
be
to
allow
a
sender
tell
the
receiver
that
it
has
some
tolerance
of
our
data
packets.
So
then
it's
not
necessary
for
the
receiver
to
to
send
immediate
acts
when
there
are
like
ignore,
well
reordered
data
packets.
However,
it
seems
like
the
benefit
that
is
seen
from
this
feature
is
not
so
clear
or
not
so
significant,
so
the
proposal
that
we
have
like
on
the
table
for
the
next
update
of
the
draft
would
be
to
just
remove
the
feature.
J
As an individual: on the mantissa thing, am I correct in saying there's no way to express zero with that encoding?
J
Yeah, say no more. All right, next slide, please. So if you send the TARR in response to a SYN/ACK, that is not sent reliably.
M
M
A
Hi, one question. In the case of option two, the maximum value of R is 124, which is very big. I don't know what kind of use case would need such a large value for R.
M
Yeah,
I
don't
know
if
maybe
bob
would
like
to
reply
himself,
but
it
seemed
like
some
people
mentioned
that
the
maximum
value
of
63
could
be
fine.
However,
bob
considered
like
the
future,
so
when
this
option
might
be
used
several
years
from
now,
and
perhaps
the
capacity
of
links,
change
and
so
on.
So
I
don't
know
if
maybe
bob
would
like
to
also
add
something.
G
A
P
M
Yeah, thanks for the consideration. Perhaps the point Bob had in mind was considering not today's scenarios but future scenarios, so the idea was being able to support larger values. But yes, that may of course be something to take into account: a very large value may also have issues, as you just explained. Thank you.
A
A
N
Okay, hello. This is Pony from China Mobile, and I'd like to present the multipath TCP robust session establishment draft. Since this work has been presented in past years, I'd like to give a recap of it first. This document wants to solve the problem of connection setup failure, which is essentially caused by establishing a connection only on a default path of unknown quality. When we use a multipath protocol, the sessions are established in a sequence, and the default path is selected as the first one by default, usually Wi-Fi.
N
If
there
is
no
wi-fi
signal,
405
gxs
will
be
selected.
However,
when
the
wi-fi
signal
is
bad
and
the
delay
is
large,
the
default
path
can't
be
established,
which
will
also
affect
the
link
of
the
second
connection.
So
this
problem
is
a
real
one
and
has
been
occurring
during
modified
protocol
deployment
and
implementation.
N
N
It is a set of extensions to regular MPTCP, and version 1 is designed to provide a more robust establishment of MPTCP sessions. It has four methods, including timer-, SYN- and RST-based ones. It also presents a design and the protocol procedure for combination scenarios, in addition to these standalone solutions: for example, the combination of SYN and RST, and the combination of timer and RST. So, a very short recap of those methods:
N
Result,
let's
say
again
to
network
voltage
is
achieved
by
modifying
the
thing
retransmission
type
timer.
If
one
path
is
defective,
another
path
is
used
and
for
the
same,
one
provides
the
ability
to
simultaneously
use
multipath
for
connection
setup
and
rst
is
used
to
terminate
connection
setup
on
other
paths
when
connection
has
been
established
on
the
first
path
and
there's
also,
the
ethereum
provides
ability
to
simultaneously
use
multiple
paths,
and
I
mean
joint
cap
is
used
for
decreasing
overhead
merging
all
simultaneous
established
path
without
a
joint
process.
N
So
this
draft
has
been
presented
several
times
from
itaf16,
and
the
last
presentation
was
in
idf,
for
the
iso
is
negotiating
with
shares
the
possibility
for
getting
rid
of
the
ipr
blocking
issue
towards
adoption
of
it.
So
the
criteria
is
something
that
authors
can
work
on
by
taking
to
the
other
people
who
need
the
publication
of
the
unwanted
the
update.
For
instance,
a
presentation
from
the
network
operator,
rather
than
dealing
with
test
results
of
the
suggested
mechanism,
would
be
interested
input
to
the
working
group.
N
So
in
this
case
I
found
this
work
was
really
valuable
and
also
planned
for
the
joint
testing
work,
but
due
to
the
coverage,
it
is
too
hard
for
us
to
find
a
place
to
do
it.
So
I
checked
the
test
before
and
it
showed
the
obvious
effect
efficiency
of
this
method
and
there's
also
the
three
start
forms
one
or
six.
You
can
see
the
demo
and
the
hexane
will
be
done
into.
N
I
can
one
seven
and
one
eight,
so
I
just
reviewed
the
overall
work,
the
draft
check
them
demo
and
the
ipr
disclosures
with
the
license.
This
draft
has
major
enough
and
completed.
I
wish
my
joining
without
ipr
issue
could
help
to
promote
this
work.
So
I'd
like
to
ask
if
we
could
call
for
adoption
of
this
draft
thanks
and
any
comments.
C
A
Yeah, having support is a very good thing, but what we really expect is showing some operational or implementation experience; then you can demonstrate that this is a great idea. At the same time, since you are an operator, we can learn how you deal with the IPR issues, and that kind of information will be very useful for us when thinking about how to proceed. Just one statement of support is just one step; we need more steps. That's my comment.
N
Yeah:
okay,
in
fact,
we
we
plan
the
test
and
other
more
things,
but
haven't
been
successful.
G
So
you
mentioned
that
the
idea
can
also
be
applied
to
mpdccp
and
be
quick,
sending
doing
connection
setups
simultaneously
or
wire.
Timers
is
something
which
is
done
in
sctp,
though
so
there's
prior
art
for
other
protocols.
Do
your
does
your
ipr
cover,
also
mp,
quick
and
mpdccp,
are?
Is
your
ipr
specific
to
mp
or
tcp.
G
Because,
normally
you
write
iprs
in
a
very
wide
scope.
That's
why
I'm
asking
and
the
other
question
is
for
this
stuff
being
deployed.
Is
it
needed
to
be
implemented
in
an
open
source
operating
system,
or
is
it
okay
if
it's
only
implemented
in
proprietary
systems,.
A
Okay,
yeah,
I
think,
as
I
mentioned,
that
you
know
showing
support
from
you
is
very
good
and
for
the
draft,
but
it's
still
a
little
difficult
for
you
know.
We
can
think
this
is
ready
for
working
group
adaption
because,
as
I
mentioned,
no,
I
want
to
see
more
broad
support
or
you
can
show
some
more
solid
test
results
or
implementation
experiment.
Then
you
can
convince
people
that
this
is
not
good
item
for
this
parking
group
right
now.
It
just
was
just
one
support,
so
I
think
we
need
more
supports
or
more
experiments.
O
Okay, we'll try to make this a bit faster than planned, to leave some room for discussion, because that's really the intent of presenting our work here in TCPM. So yeah, first I would like to thank TCPM for giving us the opportunity to present this draft for the first time within the IETF. The draft is called "TCPLS: Modern Transport Services with TCP and TLS", and you might be wondering whether TCPLS is an acronym.
O
So
I
will
not
detail
the
content
of
the
preset
presentation.
We
will
look
at
that
just
just
now.
So,
first
as
an
introduction,
I
would
like
to
go
back
to
two
important
protocol
that
have
been
designed
within
the
itf
and
that
extended
the
transports.
O
The
transport
services
provided
so
the
first
one
is
mptcp
which
started
in
2009
and
the
first
specification
shipped
into
2013
and
enabled
bandwidth
aggregation
with
several
subflows
fade
over
in
case
of
network
failure
with
a
make
before
break
or
break
before
make
mechanism,
and
it
was
made
in
a
way
that
was
backward
compatible
with
tcp.
O
So
so
those
were
the
main
benefits
of
of
mptcp,
but
it
also
had
some
issues.
So
the
address
exchange
mechanism
was
not
really
secure,
although
it
improved
it
improved
in
v1
and
it
used
tcp
and
the
tcp
options
which
are
prone
to
middle
box
interference,
and
so
that
really
drive
the
design
of
mptcp
and
made
the
design
a
bit
more
difficult
to
to
make.
So
mptp
also
proved
to
be
maybe
difficult
to
implement,
and
I
think
we
can
take
the
seven
year
journey
from
this
v0
specification
to
mainline
linux
as
an
example.
O
As
an
example
of
that,
of
course,
many
things
happen
in
that
time
frame.
But
still
it's
a
long
time,
so.
Another
important
protocol
that
was
more
recently
designed
is
quick,
which
started
in
2016
with
the
goal
of
designing
an
udp
based
transport
protocol
and
which
the
first
version
of
the
protocol
was
shipped
into
in
last
year.
Actually
and
it
enabled
stream
multiplexing
without
head
offline,
docking
and
connection
migration
and
failover
as
well.
O
So
I
think
an
important
point
that
we
we
need
to
look
at
today
is
that
we
know
that
tls
is
the
most
used
protocol
at
tcp
and
the
latest
version
is
even
expanding.
The
the
use
of
encryption
to
extend
the
tls
protocol
with
encrypted
tls
records
and
encrypted
extension
and
and
together
with
the
fact
that
tcp
supports
in
the
network
and
in
operating
system
and
kernel
remains
wider.
Tcp
is
also
the
fallback
of
quick.
O
Currently
still
so,
tcp
will
remain
for
a
long
time
and
given
the
ubiquity
of
tls
and
the
lesson
we
learned
when
designing
quick.
The
question
I
would
like
to
to
ask
is-
or
it's
a
rhetorical
question,
of
course,
but
the
question
is:
can
we
provide
new
transfer
services
with
tcp
and
tls,
and,
and
the
answer
in
this
presentation
is-
is
of
course
a
positive
one.
O
What it does is introduce a new TLS extension that indicates support for TCPLS. Here this extension is exchanged, in the sequence diagram, in the ClientHello extensions and in the ServerHello encrypted extensions, and at the end of the TLS handshake both endpoints know they can use TCPLS.
O
It has another TLS record here, which is a cleartext record, and this record contains a series of TCPLS frames; here it's a single stream frame. We will not go over the details of the fields; I think all of you can recognize very strong similarities to what has been done in HTTP/2 and in QUIC.
O
So
obviously,
as
you
probably
wondering
now,
those
streams
put
in
a
single
connection
are
subject
to
edifying
blocking
this
is
equivalent
to
what
have
been
achieved
with
http
2..
So
what
we
propose
is
also
a
way
to
for
tcp
allies
to
manage
several
tcp
connection
in
a
single
session
and
schedule
the
records
across
those
tcp
connections
and
then,
by
giving
the
possibility
for
the
application
to
map
the
streams
to
connections
the
application
can
can
choose
the
stream
it
wants
to
protect
from
other
flight
blocking
from
others.
O
So
here
in
our
example,
we
have
stream
a
and
stream
b
on
a
single
connection
and
then
stream
c
and
stream
d
on
this
on
another
connection
of
the
session,
and
so
stream
c
and
stream
d
will
be
protected
from
that
offline
blocking
happening
on
the
first
connection
and
involving
string
stream
a
and
stream
b.
O
So
this
brings
the
question
of
how
can
we
add
tcp
connection
to
a
cpls
session?
The
mechanism
we
have
designed
involves
the
server
giving
tokens
to
the
clients
and
the
clients
using
those
tokens
to
open
a
new
connection.
So
here
we
have
the
server
giving
a
token
inside
a
frame.
So
this
token
has
the
value
abc
and
it
has
the
sequence
number
one
by
controlling
the
number
of
tokens
given
to
the
client
and
the
server
controls.
O
The
number
of
connection
what
the
client
can
do
is
then
open
a
new
tcp
connection
and
then
use
another
tls
extension
which
is
called
tcplus
join.
Using
this
token
value
and
the
server
use
this
token
value
to
identify
and
join
the
tcp
connection
to
the
tls
session
and
then
at
the
end
of
the
of
the
token
exchange,
the
connection
is
open
and
it
has
a
given
connection
id,
which
we
will
detail
what
it
is
used
for
in
in
the
coming
slides.
O
So
the
question
now
is
what
kind
of
crypto
material
is
used
in
this
additional
connection.
So
if
we
look
at
tls
tls
encrypt
each
nonce
using
each
record,
sorry
using
a
unique
nonce-
and
this
is
the
the
non-derivation
that
is
specified
in
tls
1.3.
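For reference, the TLS 1.3 per-record nonce derivation being referred to (RFC 8446, Section 5.3) works by left-padding the 64-bit record sequence number to the IV length and XORing it with the static write IV. A small sketch:

```python
def tls13_nonce(write_iv: bytes, seq: int) -> bytes:
    """TLS 1.3 per-record nonce: the record sequence number, left-padded
    to the IV length, is XORed with the write IV (RFC 8446, Sec. 5.3)."""
    padded = seq.to_bytes(len(write_iv), "big")
    return bytes(a ^ b for a, b in zip(write_iv, padded))

iv = bytes(12)                       # all-zero 96-bit IV, for illustration
assert tls13_nonce(iv, 0) == iv      # sequence 0 leaves the IV unchanged
assert tls13_nonce(iv, 1)[-1] == 1   # the sequence lands in the low bits
```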
O
As is, we cannot share this sequence among the TCP connections of a TCPLS session, because then the receiver has no way to know which record sequence the incoming records carry. And in the sequence diagram I've shown you,
O
I've shown you some TLS messages that resemble a TLS handshake, but actually we do not want to do a full TLS handshake and then derive a new initial vector, for instance, because that is costly. So what we propose is to change the nonce construction: add the connection ID, the ID of the TCP connection that is part of the TCPLS session, into the upper bits of the IV, and keep a per-connection record sequence.
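The modification just described might be sketched like this. This is a rough interpretation of the talk, not the draft's normative construction: the connection ID is mixed into the upper bits of the IV while the per-connection record sequence occupies the lower 64 bits, so nonces stay unique across the connections of a session:

```python
def tcpls_nonce(write_iv: bytes, conn_id: int, seq: int) -> bytes:
    """Sketch of the TCPLS idea: the connection ID is XORed into the upper
    bits of the IV, and a per-connection record sequence fills the lower
    64 bits (illustrative layout only)."""
    n = len(write_iv)
    mixed = conn_id.to_bytes(n - 8, "big") + seq.to_bytes(8, "big")
    return bytes(a ^ b for a, b in zip(write_iv, mixed))

iv = bytes(range(12))  # illustrative 96-bit IV
# The same record sequence on two different connections gives two
# different nonces, so per-connection sequences can safely restart at 0.
assert tcpls_nonce(iv, 0, 5) != tcpls_nonce(iv, 1, 5)
```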
O
Another service we would like to provide with our protocol is failover. The idea is this: in our example, a client has opened a connection to the server and sends two chunks, stream A and stream B, and something bad happens to stream B: it never reaches the server, and actually the whole TCP connection is disrupted, either by a middlebox or by a network failure, for instance.
O
We have the client sending two TLS application data records over connection ID 0; the second never reaches the server, as I explained, and the connection gets disrupted. The question becomes: how can the client know which record never reached the server and retransmit the one needed? For that we introduce TCPLS acknowledgements, which enable the sender to know which TLS records were effectively received.
O
We
cannot
rely
on
on
tcp
acknowledgements
because
they
just
mean
that
the
byte
stream
reached
the
receiver,
but
we
do
not
know
whether
it
was
read
by
the
application,
whether
it
was
stuck
in
the
kernel
buffers
or
whether
the
decryption
was
successful,
and
so
that's
why
we
need
those
acknowledgements.
O
So here in our example the server sends an ACK to the client. (Sorry, it says "server CID" on the right-hand side; it's supposed to say "client CID".) The server sends back to the client an ACK indicating that on CID 0 it has received up to sequence number 3, and then the client can make the right decision and retransmit the lost frames; eventually those lost frames get acknowledged as part of the records containing the two ACKs at the bottom.
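A toy sketch of the sender-side bookkeeping this implies (purely illustrative; none of these names come from the draft): the sender keeps unacknowledged records per connection ID, drops those the peer has acknowledged, and retransmits the rest, possibly on another connection after a failure:

```python
# Toy sender-side bookkeeping for TCPLS-style acknowledgements: records
# are tracked per connection ID until the peer acknowledges their sequence.
unacked = {0: {1: b"record-1", 2: b"record-2", 3: b"record-3",
               4: b"record-4"}}

def on_ack(conn_id: int, highest_seq: int) -> list:
    """Drop records acknowledged up to highest_seq on this connection;
    return what still needs retransmitting (e.g. on another connection)."""
    pending = unacked[conn_id]
    for seq in [s for s in pending if s <= highest_seq]:
        del pending[seq]
    return [pending[s] for s in sorted(pending)]

# The peer acknowledged up to sequence 3 on connection 0, so only
# record 4 remains to be retransmitted.
assert on_ack(0, 3) == [b"record-4"]
```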
O
Using all that, we are also able to do bandwidth aggregation, simply by chunking a stream into stream frames and then sending them in different TLS records over the two connections.
O
So I've tried to quickly present the protocol here; it's explained in more detail in the draft. We very much welcome any feedback and comments on the draft, either on the protocol or on the use cases and services we presented, and we will continue working on this protocol.
O
There
are
obviously
some
part
that
needs
to
be
discussed
and
designed.
You
probably
didn't
mind
congestion,
control
and
flow
control.
We
also
do,
of
course,
but
also
maybe
looking
at
further
ways
to
use
the
tls
extension
to
to
to
enable
new
services
in
ntc
pls
or
even
some,
how
to
make
the
receiver
zero
copy,
as
in
tls
1.3
and
all
those
kinds
of
stuff
we're
really
interested
in
looking
in
the
future.
O
So
the
draft
follow
the
prel
preliminary
version
of
the
tcps
protocol
that
was
presented
in
a
paper
at
conex21
that
you
can
read.
If
you're
interested
in
more
details
on
our
approach,
we
got
also
some
early
implementation
feedback
as
we
have
implemented
the
last.
The
latest
draft
we've
implemented
it
on
top
of
a
tls
1.3
implementation
in
c
called
fico
tls.
O
This
required
us
to
modify
50
lines
of
the
pico
tls
library
so
to
to
implement
the
the
iv
nonce
construction
I've
presented
and
we
were
able
to
implement
stream
multiplexing,
failover
and
multipath
in
about
2.5
k,
lines
of
c
codes
and
this
product,
this
prototype
will
be
released
under
an
open
source
license
in
the
in
the
coming
weeks.
O
A
A
O
Yeah, I think it's interesting for TCPM people, because people who participated in MPTCP have joined the group, and here we're touching on services that are addressed by MPTCP; we believe TLS opens a new way to address those services. That's why we think the presentation fit here.
R
R
O
Actually, the way we changed the nonce construction is quite simple and doesn't require the server to issue anything more than fairly simple values, which are those tokens I presented. But this is something we could look at: whether using that is a better fit and requires inventing less than our nonce proposition, yeah.
O
Yeah
but
the
the
the
change
in
the
iv
that
we
propose
is
very
reminiscent
of
of
many
discussion.
I
I've
I've
seen
at
the
atf,
so
this
is
stuff
that
they're
considering
for
mp,
quick,
for
instance-
and
I
think
I've
seen
some
discussion
in
that
part
for
dtls,
although
I
don't
remember,
for
which
mechanism
it
was
maybe
key
recognition,
negotiation
or
something.
R
Yeah, cryptanalysis by intuition is generally not recommended.
A
Okay, yeah, this is an interesting idea, but we are still not sure if this is a topic for TCPM, so let's continue the discussion on the TCPM mailing list, of course. Yeah, thanks so much, thank you. Okay, thank you. So that was our last presentation. Thanks so much for your participation and for a very productive discussion. See you at the next IETF, thanks so much.
H
A
Yeah, but I'm going to go back right now.