From YouTube: IETF105-TCPM-20190725-1330
Description
TCPM meeting session at IETF105
2019/07/25 1330
https://datatracker.ietf.org/meeting/105/proceedings/
B: So that's the agenda. We give a working group status update, then we have talks or presentations regarding three working group documents: RFC 2140bis, RFC 793bis, and a document which is at the end of its last-call period, the converter draft. After that we have several presentations regarding non-working-group documents: one is about HyStart++, then two presentations regarding YANG, then Gorry will say something generic regarding congestion control, and at the end we have a presentation regarding a newly proposed TCP feature called TCP Export.
C: Yes, so the first document that we're about to finish is the TCP converter one. For that I have started a working group last call; you see here on the slide what I've put in there, and it is almost finished. Almost means that I'm basically waiting for confirmation from the reviewer that his pretty comprehensive review has been addressed. Other than that, I think we are fine, and my plan is to close the working group last call once I have this confirmation, and then we will move on.

C: You don't have a presentation on this draft later in this session but, as I said, I'm going to close the working group last call once I have the confirmation that all comments are addressed. And finally, I've done this on the list already, I just want to give a heads up. We have seen.
F: Hi, this is Praveen. I had a comment on the RACK document: it seems to be stable. I think we gave some feedback and that's been updated, so I think it's very close to last call, in my opinion; other than minor editorial fixes, it seems ready to me. So what do the chairs think about that?
B: Basically, the chairs share that view, so we will contact the authors and see how to progress that. It would be good if you can provide the feedback to the list or to the authors; probably better to the list.

F: I will.
G: That's good, got it. Okay, I guess one important thing to point out about that slide is the version number, 06, which means that this thing had already gone through several updates, reviews, and comments in this group before adoption. That was because at that time we were asking for BCP status, and that's no longer the case; now it's Informational and adopted as such. A quick list of some updates that we made: we reissued it with the correct name, and we cleaned up some old references to T/TCP.
G: There was some, well, ancient text in this draft that really had to be removed from 2140bis. We moved references to the informative section, updated a section to clarify that there is no impact on interoperability, and updated Appendix B as per request. So, regarding next steps.
G: We, the two authors, went through it and we don't find very much; we believe this is pretty much ready, except there's one thing that we would like to include. The document discusses ensemble sharing and temporal sharing. Ensemble sharing is when you have parallel ongoing TCP connections that share information between them; temporal sharing is when a connection is over, a new connection comes, and it should learn something from the previous one. Right now the document just talks about these things, and it doesn't really say anything about time.
G: So these are two very different cases, and if you have some very long-lasting state, this may also benefit from special consideration. There are some constants that you may find have never worked all the time: you know, if something never worked in a week on the same path, with the same host, with the same destination, and the host is maybe not a mobile one but staying in the same place, at some point it may make sense to go and start adjusting them.
G: So there are some ideas that come from a previous draft from Joe Touch on automatic adjustment of the initial window. This is just an example considering the initial window: right now we have the experimental initial window of 10, we have some hosts using larger values, and we have some hosts using smaller values. The question is simply whether we could be a little bit more dynamic.
G: Well, if you know, for instance, that a very large initial window never worked to one destination for a long, long time, it probably isn't really useful to keep it. And if the sender's congestion window always gets much larger than the initial window is, why limit it to ten? Maybe, if you do pacing as well.
G
The
interesting
thing
here
is
that
adding
send
some
dynami
city
to
this
would
automatically
do
an
experiment
and
auto
adjust
to
the
environment,
so
you
that
is
in
line
with
this
other
draft
from
Joe.
If
you
would
have
an
old
and
a
slow
network
connection
that
has
a
small,
a
small
note
that
might
just
never
increase
its
initial
window
even
over
weeks
a
month
so
that
this
draft
talks
about
you
know
long
timescales,
my
own
office
machine,
where
the
network
is
prepared,
your
own
permanently
upgraded
together
with
a
backbone
that
might
dynamically
change
the
windows.
G: Regarding these things, there are different types of feedback that can influence this dynamicity. Currently this is all convergence-based: the end values affect the future predictions simply by how the statistics aggregate, but there are other ways that it could be done. We would call them implicit and explicit. Implicit would be a matter of tracking whether initial conditions persist over a connection; explicit, I suppose, could be a very specific thing, like looking at a particular packet exchange. Again, this is just a discussion of these possibilities.
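The implicit, convergence-based feedback described here could be sketched as a per-destination cache of initial windows. This is a hypothetical illustration of the idea under discussion, not text from any draft; the class, its method names, and all constants are assumptions.

```python
# Hypothetical sketch of per-destination initial-window (IW) adaptation in
# the spirit of the discussion above (not from any RFC or draft): remember
# how past connections to a destination went and nudge IW up or down.

DEFAULT_IW = 10   # segments (the RFC 6928 experimental value)
MIN_IW, MAX_IW = 2, 40

class DestinationCache:
    def __init__(self):
        self.iw = {}  # destination -> currently trusted IW in segments

    def initial_window(self, dest):
        return self.iw.get(dest, DEFAULT_IW)

    def connection_closed(self, dest, iw_used, loss_in_first_rtt, final_cwnd):
        """Implicit feedback: adjust the cached IW from how the connection went."""
        iw = self.iw.get(dest, iw_used)
        if loss_in_first_rtt:
            # The initial burst overshot the path: back off for next time.
            iw = max(MIN_IW, iw // 2)
        elif final_cwnd > 2 * iw:
            # cwnd always grew well past IW: the cap looks too conservative.
            iw = min(MAX_IW, iw + 2)
        self.iw[dest] = iw

cache = DestinationCache()
cache.connection_closed("192.0.2.1", 10, loss_in_first_rtt=False, final_cwnd=80)
print(cache.initial_window("192.0.2.1"))  # grew from 10 to 12
```

The point of the sketch is the feedback loop, not the exact arithmetic: a slow path would keep its small window over weeks, while a frequently upgraded path would drift upward, matching the behavior described in the talk.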
I: So, very fast; I'm going to talk later, but I'm curious to know what sort of numbers you have from other RFCs for these time periods, or whether you're going to say, oh, we can do it in future. So did you come across any numbers to say how long you think a path stays valid?

G: I did not.

I: Okay.
J: Can you see them? So yeah, I'm Wes, and it's nice to be virtually meeting today. I haven't been able to make it in person for a while, but I have been working on RFC 793bis for quite a while and thought it was really important to try to talk virtually about it today, because I think it needs to get ready for a working group last call soon: it's getting to the point where the things we've identified to do are pretty much done and there's not a lot of churn happening.
J: And I wanted to take a minute, because we haven't been talking about this much at the past few meetings, just to give the background, so people remember what we're trying to do. RFC 793, everyone hopefully knows, is the TCP specification. It's been around for a very long time and was never actually fully updated, although a lot of things updated it partially, and so several years ago the working group decided to adopt an attempt to update it, and to make that tractable.
J: We said, going in, that sort of the approach is to not really open up any new topics, but just to focus on dealing with accepted errata: things that have been submitted over the years and collected through the RFC Editor's process. The area directors have usually talked to the working groups and marked them either verified, or held for document update, or rejected; we're not going to put the rejected ones into the document.
J: Then we collected all the other RFCs that update parts of 793 and tried to make sure that those updated parts got reflected and, most importantly, that we're not opening this up. There was some discussion early on about changing the semantics of something like the options length, to do long options and stuff like that; we're not chartered to do that in this document, so let's not talk about that today.
J
The
goal,
though,
that
I've
been
using
as
editor
of
the
document,
is
to
trying
to
patch
in
the
exact
text,
from
all
of
the
relevant
documents,
wherever
possible,
as
you'll
probably
notice
in
reading
it.
That
doesn't
always
lead
to
the
the
most
smooth
a
flowing
document.
But
the
hope
is
that
it
prevents
us
from
adding
any
mistakes
so
next
chart.
J
So
I
posted
a
revision
13
that
you
can
find
on
the
ITF
site
now
and
if
you
check
the
diffs
you'll
notice,
there
are
only
very
small
updates
and
those
are
pretty
boring.
So
adding
a
section
reference
about
triggering
this
error
report
functionality
for
different
ICMP
types,
there's
one
open
issue
that
I
have
some
notes
around
and
we'll
talk
about
in
a
little
bit
and
then
I
had
been
marking
the
IANA
considerations
as
like
a
to-do
thing
to
to
analyze
whether
there
were
any
registries
that
needed
to
be
updated
or
something
like
that.
J
But
I
did
sort
of
a
check
on
that
and
didn't
think
that
there
was
anything
further
than
needed
to
be
done
there.
So
I
just
sort
of
took
away
that
to-do
item.
So
with
that
update,
the
only
thing
I
think
that's
left
to
fix
is
this
issue
that
we'll
talk
about
next
I
have
a
revision
of
14.
It's
in
work.
You
can
find
it
on
the
in
the
git
repository
that
this
look
at
URL.
J: The URL on the slide points there, and I have already made fixes there on top of version 13, based on Gregg Skinner's review that went to the mailing list, also on Michael Scharf's review that went to the mailing list a few days ago, and a couple of other small things that I noticed. So the bullet at the bottom is actually already outdated, because yesterday I added a commit that addresses Michael Scharf's review.
J: The thing I'm going to propose, though, after looking at it in a bit more detail myself, is that we actually don't do anything. I'm going to talk just briefly to convince you that maybe that's the right answer. So I had marked something as a to-do in the document, because I thought there was an issue.
J: The issue is around what's called the error report functionality, as RFC 1122 describes it. Now, that's kind of a funny thing already, because in TCP stacks with the socket interface we don't exactly have something that matches the error report functionality, though there are sort of ways that what's described happens; so it's sort of a generic functionality anyways. Now, RFC 1122 to me looked inconsistent in one of the aspects of describing this error report functionality, because there is one section that says these reports must be triggered in the case of excessive retransmissions.
J: So there's, you know, a threshold for giving up, and there's another threshold for notifying the application, and also notifying IP, that there seems to be a connectivity issue. So it appeared to me that the text said you must trigger reports when you hit that first threshold, where it looks like there's a connectivity issue, and then I found another place in the document.
J: So that's even more confusing than it needs to be, but that second bullet seems to say you should trigger the error report. Of course I was confused by this and, discussing it with the chairs a little bit, one of the things that came out is, I think, that there's a possibility we could interpret this as: you must implement the functionality, and you really should generate the error report.
J: For reference, I wanted to look at Linux and Windows error codes and see what's done here, and I actually couldn't find anything corresponding to this in terms of an error that's generated for the application to receive, in either case. So I think it may not be the most relevant thing to spend a ton of time on, but my proposal is basically to do nothing: if people have written stacks all these years based on the 1122 text and not been confused, then maybe my initial confusion was just a misreading of this area. So I'd like to ask for feedback on this.
B: A quick comment from the floor, just reading the text: could it be that the "should" you were referring to, which you think is inconsistent, is actually a SHOULD? The sentence is "TCP SHOULD", and then at the end, "when R1 is reached and before R2"; so in this condition it is a SHOULD, and in the other one it's a MUST.
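For readers following along, the R1/R2 machinery from RFC 1122 Section 4.2.3.5 being debated here can be sketched as follows. The threshold values are RFC 1122's stated minimums (R1 of at least 3 retransmissions, R2 of at least 100 seconds of retrying); the callback shape is an assumption for illustration, and whether the R1-level report is a SHOULD or a MUST is exactly the question under discussion.

```python
# Sketch of the RFC 1122 R1/R2 retransmission thresholds being discussed.
# R1 is measured in retransmissions (at least 3), R2 in elapsed time (at
# least 100 s). The report/close hooks are illustrative, not a real
# stack's API.

R1_RETRANSMITS = 3     # advise IP and report to the application
R2_SECONDS = 100.0     # give up and close the connection

def on_retransmit(n_retransmits, seconds_since_first_send, report, close):
    """Called each time the same segment is retransmitted."""
    if seconds_since_first_send >= R2_SECONDS:
        close()                                   # hard failure: abort
    elif n_retransmits >= R1_RETRANSMITS:
        report("possible connectivity problem")   # soft error, keep trying

events = []
on_retransmit(4, 30.0, events.append, lambda: events.append("closed"))
on_retransmit(20, 120.0, events.append, lambda: events.append("closed"))
print(events)  # ['possible connectivity problem', 'closed']
```

As the discussion notes, common socket APIs never surface the R1-level soft report to applications; only the R2-level failure becomes visible as a connection error.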
J: Michael Scharf also had some great comments, and we don't need to go through them in much detail, but I sort of summarized them here so you can see that they're fairly minor: adding cross-references, deduplicating some text. There was some really historic language in there, left over from the PDP-11 days and such, that I have removed in the repository copy, and I also removed all the stale glossary terms about the ARPANET and things like that, which weren't actually referenced in the rest of the document anyway; I think Michael in his review agreed.
J: These are quite small things and, in general, it's fairly stable and in good overall shape, which brings me to, I think, my last chart, the next steps. I'm pretty sure we're converging, so I'd like to propose that we do a working group last call soon, so that people can check whether we really have converged to a good place, and I was hoping that we could identify some specific reviewers.
C: Okay, that is probably not good enough yet for going to working group last call, but we do need reviewers for this document, so I'm now asking the room: are there volunteers who would want to read this document? I can personally say it's easy to read; it's pretty okay, so don't be worried. It's a good document.
C: The final question that Wes has raised, which is actually a little bit for our AD, is the question of whether we should really ask for a cross-area review. I personally agree with Wes that this is something we might do in order to get real feedback from some of the other areas that obviously heavily use TCP. So I'm asking, maybe the ADs in the room: are there any thoughts on whether we should do that, or if that's a bad idea, or when we should do it?
C: Sure; I mean, for sure we all understand that this document is essential for the IETF as a whole, and the IETF last call will be essential for this document. So the plan is: yeah, we just have to discuss the best way to approach that, to make the process as smooth as possible, and to ensure that at the end of the day we have an excellent document. So, are there any other comments on the process here? That doesn't seem to be the case.
C: And then, once we have the review feedback, we will think about starting a working group last call. Of course, it depends a little bit on the timing of the reviews and the amount of comments that we get, but in general, at least my intention would be that we get this off the table as soon as possible, maybe already around the next meeting or early next year. So that would be my plan.
O: Good afternoon. I am from Korea Telecom and, in the meanwhile, the generic TCP converter draft is in working group last call or so. KT has prepared a proof of concept of the generic converter implementation over 5G, so today I will talk about some results of this PoC. Next slide, please.
O: The usual motivation of the generic TCP convert protocol is the following: the number of current TCP-enabled clients that use Multipath TCP is much larger than the number of servers, so the clients just want to benefit from Multipath TCP at least on a fraction of the end-to-end path, whether in the wireless or the wired part, or something like that. There's a figure here; it doesn't look quite right, but anyway.
O
So
actually
the
company
protocol
profile
the
generic
one,
because
we
can
convert
any
kind
of
TCP
options
and,
for
example,
TFO
sack
or
something
timestamp,
or
even
even
including
the
MP
TCP.
So
we
are
now
in
the
TCP
I'm
working
group,
so
it
is,
can
be,
it
could
be
offset
to
window
generally.
So
next
slide,
please.
So.
O: The basic design of the convert protocol: it is an application-level protocol, and it listens on a specific TCP port, similarly to how a web service does. All of the commands and the responses are encoded in TLV format, and thus it provides more extensibility than the other kinds of approaches and protocols, like SOCKS or plain IP encapsulation or something like that. So, actually, the first command is sent in the SYN.
O: The payload of that SYN contains the actual destination IP address and port information, and because this is carried in the middle of the three-way handshake of regular TCP, it minimizes the delay to establish new connections. Also, the SYN-ACK message carries the responses as part of the three-way handshake. And because we don't use any kind of encryption, encapsulation, or proxy-like protocol on top of the application, we call it a plain transport mode between the client and the converters.
O: So here is a simple example: there's a client on the left side and the server on the right side, and the converter proxying in the middle. Whenever the client wants to make a real TCP connection, it starts the three-way handshake with the SYN message. The SYN contains the server IP and the port number and the Connect command, and is sent to the converter, and the converter sends a SYN to the actual server based on that information.
O: Using the Connect information, the converter waits for the SYN-ACK from the server, and then turns back to the client a SYN-ACK carrying the server's MSS in the OK TLV, which rides in the extensible TCP option. So, after that, the actual data goes through the converter to the server.
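The Connect exchange just described rides in the SYN payload as a TLV. A rough sketch of building such a message follows; the type code and field layout here are illustrative assumptions for the TLV idea, not the exact wire format from draft-ietf-tcpm-converters.

```python
# Illustrative sketch of a TLV-encoded Connect command carried in the SYN
# payload toward the converter. The type code and field sizes are assumed
# for illustration; consult draft-ietf-tcpm-converters for the real format.
import socket
import struct

CONNECT_TLV_TYPE = 10  # assumed type code

def build_connect_tlv(server_ip, server_port):
    addr = socket.inet_aton(server_ip)              # 4-byte IPv4 address
    value = struct.pack("!H", server_port) + addr   # port, then address
    # Type (1 byte), Length (1 byte, total TLV bytes), then the value.
    return struct.pack("!BB", CONNECT_TLV_TYPE, 2 + len(value)) + value

tlv = build_connect_tlv("198.51.100.7", 443)
print(len(tlv))  # 8 bytes: type + length + port + IPv4 address
```

Because the whole command fits in the SYN payload, the converter can act on it without any extra round trip, which is the property the latency measurements later in the talk rely on.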
O: So, about the proof of concept that KT prepared: it was a collaboration to implement it, based on the draft, though that version is now a bit out of date, by the way. On the client side we used open-source client libraries, and we used a Wireshark dissector to see the convert packets for our engineering analysis, and we compared our results.
O: For the generic converter we used an open-source implementation; for the SOCKS version 5 comparison we used the SOCKS implementation that is commercially used in the KT network for our subscribers. As you can see at the bottom, the left-hand side shows the 5G-only mode and the right-hand bottom side shows the Wi-Fi plus 5G mode.
O: These are captures from one of our gateway proxy servers in the Korea Telecom data center. We used a convert-protocol-supported HAG, a hybrid access gateway, which was modified for this test; for the SOCKS case we used a commercial version of the Multipath TCP proxy, and we compared them. As you can see in the graph, the lowest is the generic converter multipath mode.
O: Actually, at this time we used Wi-Fi as the primary path, so we got on average a 27-millisecond shorter latency, and for the converter case around 64 milliseconds of direct-RTT converter latency. Compared to the generic converter, the latency is more than three times higher in the SOCKSv5 case, and the full Multipath TCP mode of the SOCKS setup is shorter than the HAG mode; that's because in Multipath TCP the additional subflows are established after the three-way handshake.
O
The
sax
protocol
procedure
is
called
pseudo.
Some
portions
of
us
asked
for
two
purposes
of
fancy,
and
some
portion
of
from
another
portion
of
the
procedure
call
suta.
Wi-Fi
is
why
the
the
references
is
a
much
smaller
tend
to
fight
you
anymore.
Okay,
next
right
is
so
other
instances
and
us
right
and
we
compare
the
signaling
of
the
diversity
convert
and
the
SAS
refi
and
the
inter
web
site
from
the
commode
toast
crime
site
to
the
proxy.
O: With the converter we require around one RTT, just the TCP SYN, SYN-ACK, and ACK, and from the converter proxy side to the server we need one RTT. By the way, with SOCKSv5, because we are using the authentication mode, we need four RTTs from the client to the proxy side, and also because some of the SOCKS procedure functions require some processing time.
O: Those SOCKS messages cannot be piggybacked on a push message, so each takes at least one RTT to the server. So, compared to SOCKSv5 in the figure, the generic converter can have a much shorter setup latency. So these are the results; thanks, and questions?
O: Yes; actually, because we need some time to implement this one. We can insert some identification message in the TLV together with the SYN or SYN-ACK or something like that, and we are preparing that in another draft. We needed time, so I didn't implement that kind of authentication for this one. Yes.
O: KT tried to make it somewhat similar to SOCKSv5: the connect, the identification, and the authentication messages are handled at the same time, in just one procedure, so we can see that the round trips are shorter than with the actual SOCKSv5. Yes, that makes sense. Okay, thanks for the comment about that.
C: Yeah, thanks a lot; this is really very useful information. We often do this in TCPM when we finalize a document: it's always good to have feedback from potential users. So thanks a lot for this very useful data and, as I said before, this document is about to finish. The only pending thing is that I have to hear back from the reviewer, basically, and then we are done with the document and we will ship it to the IESG. So thanks a lot. Thank you.
F: Hi, can you hear me? Yes? Hi everyone, today I'm going to present HyStart++. It's a modified version of slow start for TCP; this is work done with others but presented by me today. Next slide, please. First, let's quickly recap HyStart. Traditional slow start for TCP has a flaw: it does exponential increase, but it can potentially overshoot the ideal sending rate by a lot, and this causes massive packet drops, and then TCP spends multiple round-trip times recovering from those losses.
F
So
the
Cole
here
is
to
exit
slow
start
early
before
the
Mass
Effect
packet
loss
happens,
so
the
paper
had
two
algorithms
one
was
based
on
deal
increase
and
the
other
was
based
on
entropic
at
arrival.
The
original
paper
suggested
that
the
sender
run
both
of
these
algorithms
in
parallel
and
then
exit
slow
start
early.
F
The
problem
is
that
the
inter
packet
arrival
algorithm
does
not
really
work
well,
because
there
is
a
lot
of
act
compression
happening
both
on
the
receiver
because
of
things
like
RC
as
well
as
I
meant
LRO
and
as
well
as
the
network
doing
ACK
compression
the
dealer
increase
algorithm
works,
but
we
find
that
it
has
a
lot
of
false
positives.
This
happens
because
of
latency
fluctuations
on
wireless
links.
F: It also happens because there could be a large burst from another flow that causes transient queue build-up, which could last multiple round-trip times; the flow then ends up exiting slow start very early, and congestion avoidance will take a long time to ramp up. Next slide, please. So what is the delay increase algorithm? The goal is to keep track of the minimum RTT in each round; we keep track of this per-round minimum RTT, but we do not apply the exit check until the congestion window is higher than a minimum ssthresh value,
F: even if delay is increasing before then. Once that condition is met, we calculate a threshold based on the last round's observed minimum RTT, and if the current round's RTT is greater than the last minimum observed RTT by that threshold, then we exit slow start and we capture the value in ssthresh. The slide notes the constants used in the current implementation in Windows; I believe the constants are very similar in Linux, based on mailing-list discussions. Next slide, please.
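The per-round exit rule just described might be sketched like this. The clamped threshold (one-eighth of the last round's minimum RTT, bounded between 4 ms and 16 ms) follows the HyStart++ draft as presented; treat the exact constants as assumptions rather than the normative values.

```python
# Sketch of the HyStart++ delay-increase exit check described above: each
# round tracks its minimum RTT, and slow start is exited when the current
# round's minimum exceeds the previous round's by a clamped threshold.
# Constants are the draft's values as I understand them (assumptions).

MIN_RTT_THRESH = 4.0    # ms, lower clamp on the exit threshold
MAX_RTT_THRESH = 16.0   # ms, upper clamp on the exit threshold

def should_exit_slow_start(last_round_min_rtt, current_round_min_rtt):
    eta = min(MAX_RTT_THRESH, max(MIN_RTT_THRESH, last_round_min_rtt / 8))
    return current_round_min_rtt >= last_round_min_rtt + eta

print(should_exit_slow_start(40.0, 43.0))   # threshold is 5 ms -> False
print(should_exit_slow_start(40.0, 46.0))   # threshold is 5 ms -> True
```

The clamping is what tames the false positives mentioned earlier: tiny jitter on short-RTT wireless paths cannot trigger an exit (the threshold never drops below 4 ms), and long-RTT paths do not need a huge absolute delay rise to exit (the threshold never exceeds 16 ms).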
F: So why is this not sufficient? The problem here is that, even if you do this only for the initial slow start, slow start could exit prematurely because of the two problems I mentioned. So what HyStart++ does is introduce Limited Slow Start right after the TCP flow leaves slow start.
F: So basically, what we want to do here is, for each arriving ACK, still grow the congestion window, but not exponentially. So this is an additive increase, a large additive increase, and this will continue until the next congestion signal happens, and then we enter congestion avoidance. So this is a phase between slow start and congestion avoidance where we do Limited Slow Start. The LSS divisor we picked was based on experimental results in the lab, simulating various possible bottleneck buffer sizes, as well as RTTs and bandwidths. Next slide, please.
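The Limited Slow Start phase described above might look roughly like this as a per-ACK rule. The LSS divisor of 4 matches the draft as I understand it, but the growth formula here is a simplification for illustration, not the draft's normative arithmetic.

```python
# Sketch of the Limited Slow Start phase described above: the congestion
# window keeps growing on every ACK, but by a fraction of an MSS rather
# than a full MSS, so the window grows by roughly cwnd/LSS_DIVISOR per
# round trip instead of doubling. Simplified for illustration.

MSS = 1460
LSS_DIVISOR = 4

def on_ack_limited_slow_start(cwnd, bytes_acked):
    return cwnd + min(bytes_acked, MSS) // LSS_DIVISOR

cwnd = 100 * MSS
for _ in range(100):           # one round: ~100 full-sized ACKs arrive
    cwnd = on_ack_limited_slow_start(cwnd, MSS)
print(cwnd // MSS)  # 125 segments: grew by cwnd/4 in one round, not 2x
```

This sits between slow start's doubling and congestion avoidance's one-MSS-per-round growth: aggressive enough to finish probing after a premature exit, gentle enough not to rebuild the overshoot.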
F: So when we posted this draft to the mailing list, there was a very interesting suggestion by Neal Cardwell, so thanks to him. On high-BDP links this can still cause the congestion window to grow too slowly, even if we do Limited Slow Start. So the idea is to basically also start computing the congestion-avoidance-based congestion window, and take the max of the value computed by Limited Slow Start and the value computed by congestion avoidance.
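The suggestion just described amounts to taking the larger of the two growth rules during this phase. A sketch with simplified per-round growth terms follows; the real per-ACK arithmetic in the draft differs, so treat this as an illustration of the max() idea only.

```python
# Sketch of the suggestion described above: during the phase after the
# slow-start exit, compute both the Limited Slow Start growth and the
# Reno-style congestion-avoidance growth and use the larger, so high-BDP
# paths are not stuck with too-slow growth. Per-round terms, simplified.

def next_cwnd(cwnd, mss=1460, lss_divisor=4):
    lss_growth = cwnd // lss_divisor   # LSS: ~cwnd/4 added per round
    ca_growth = mss                    # Reno CA: ~1 MSS added per round
    return cwnd + max(lss_growth, ca_growth)

print(next_cwnd(2 * 1460) // 1460)    # small cwnd: CA term wins -> 3
print(next_cwnd(100 * 1460) // 1460)  # large cwnd: LSS term wins -> 125
```

With a plain max of the two windows, whichever rule is growing faster dominates, which is why the speaker reports faster ramp-up on high-BDP links without changing behavior elsewhere.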
F: Basically, we were able to force this condition explicitly, and we observed that for high-BDP links this does end up helping: the ramp-up is much faster in cases where we exit too early due to HyStart. Next slide, please. So the current status is that this is deployed; it is on by default for all connections. I am talking to you from a system that has this on by default.
F
The
fix
for
the
high
BP
links
is
also
now
in
preview,
so
we
expect
to
be
also
turned
that
on
for
the
next
release,
the
draft
was
actually
posted
to
the
mailing
list.
So
please
review
and
provide
us
feedback.
They
already
had
one
review,
but
I
would
be
I'd
appreciate
it
if
more
people
could
review
this
and
provide
us
feedback
in
future.
P: Jana Iyengar here; thank you for doing this presentation, Praveen, it's useful. I was going to say that you said you're using a system that uses this right now, but it must be past the startup phase, so we don't quite know how that went. I was going to ask if you had any traces that you could show of HyStart++ behavior.
P: It seems quite promising, and obviously you've done measurements; you've probably seen a million traces. It would be super useful to see a few traces to understand exactly what the dynamics are. I'm also quite interested in understanding the convergence dynamics of HyStart++ with other flows as well.
F: Thanks, Jana, that's great feedback. So yes, we would like to present more results on this. We do have lab measurement data, and we are also currently working on metrics so that we could collect more data from the wild, to be able to compare data from production systems and see how effective this is. We are currently working on techniques to do a very effective comparison using both lab measurements and in-the-wild measurements. So yes, I will try to get you that data by the next IETF.
P: I appreciate it, thank you very much. I was also going to ask you a separate question. So HyStart basically tries to figure out how to leave slow start early, and that's useful, but there's also work that can be done at the beginning of slow start. All of these, except for paced chirping, still keep the exponential increase that happens at the beginning. Have you thought about changing that aspect of it?
F
So
exponentially
increase
actually
is
very,
very
helpful.
So
yes,
because
you're
starting
out
fresh
yeah,
you
can
use
a
higher
initial
congestion
window
and
pace,
but
even
then
for
high
bdp
links.
If
you
don't
do
exponential
increase,
then
it's
very
difficult
to
figure
out
what
the
ideal
sending
rate
should
be.
F
No,
we
have
actually
we
find
that
it's
already
too
aggressive.
That's
the
reason
like
high
start,
helps
a
lot
and
that's
why
Isis
is
also
so
effective
because,
under
the
righty
of
network
conditions,
we
find
that
this
overshoots
the
idols
and
rate
almost
all
the
time.
So
so
it's
actually
too
aggressive
already.
So
that's
the
reason
we
are
doing
high
stats,
just
a.
P: Last comment; basically, I'm trying to clarify my point, which is: HyStart is a way to leave exponential increase. What I'm talking about is that paced chirping, for example, doesn't do that exponential increase up front; it does it, but in a very different sort of way. I'm trying to figure out if there are ways to combine those things, where HyStart++ helps you leave slow start when you believe that you've saturated the network, but you start off in a different way.
F: So, unfortunately, ECN is not widely deployed and we don't turn it on by default, so we don't have data to suggest whether it will help. But certainly none of this changes the response to congestion signals: if you do get an explicit congestion signal, we will exit slow start, and we will exit Limited Slow Start as well whenever there's a congestion signal. So none of that changes, but no, we have not experimented with ECN. Okay.
E: A quick comment on what Jana said about using pacing and combining it with HyStart: I think they are totally incompatible, because HyStart relies on building queue, and pacing, on the other hand, tries to avoid building queue. So it's hard to measure something you don't build.
M: Okay, great, I can see them, and I'll just say we need to move along. So, hi, my name is Kent Watsen. I'm typically in the NETCONF and netmod working groups, but today I'm presenting to TCPM a draft that's been adopted by the NETCONF working group, defining YANG groupings for TCP clients and servers.
M
My
co-author,
Michael
Scharf
is
has
helped
me
with
this
and
I
think
it
was
adopted.
We
see
see
the
TCP
mailing
lists
during
the
adoption
poll
after
104
next
slide,
please
so.
Firstly,
it's
understood
that
within
TCP
you
know
their
simultaneous
opens,
and
you
know
peer-to-peer
protocol
there's
not
really
a
client
or
server,
but
nonetheless
here
and
we
refer
to
terms
client
and
server.
The
client
is
the
peer,
that's
initiating
the
connection
and
the
server's,
the
peer
that's
receiving
or
accepting
the
connection
having
the
open
port.
M: So we are designing the YANG data models that would produce the configuration hierarchy, which in many cases translates to CLI that could be executed on routers and switches or firewalls; and now the protocols are even being used by higher-level applications. It actually started off with one draft, the NETCONF server draft, and then quickly folks said: let's have not just NETCONF, we need to do RESTCONF as well; and then, we can't just do servers, we have to do clients too; and it became larger and larger.
M: Next slide, please. Just a little bit about YANG data models: I think some of you may have a passing understanding of YANG. It is certainly popular within the IETF of late, but not everyone is up to speed, so, just quickly: YANG is an IETF replacement for SNMP MIBs. It is a way to describe data; it's a data modeling language. YANG is to NETCONF and RESTCONF, or likewise to XML and JSON, as, for instance, SMI is to SNMP, as XSD is to XML, as ASN.1 is to BER binary encoding, and as ABNF is to text encodings.
M: Hopefully that makes sense. Next slide, please. Okay, so just going back to the history: interestingly, this slide is not formatted (it is supposed to be a pretty ASCII tree there), but nonetheless, in the IETF 102 time frame the NETCONF working group had the following hierarchy of adopted working group drafts. The names that you see there are actually draft names; you would prefix each of them with draft-ietf-netconf- and then the name to get the full draft name.
M
Again, this is what we had in the 102 time frame. The note on the right-hand side is just mentioning, in case you're not aware, that the NETCONF protocol has a mandatory binding to SSH and an optional protocol binding to TLS, and the RESTCONF protocol has a mandatory binding to HTTPS, which of course is HTTP on top of TLS. So while you don't see the dependency arrows quite right, that's what they're trying to illustrate there. Next slide, please.
M
The YANG models at that time defined knobs within the SSH and TLS groupings for keepalives. Within those protocols, the standard usage is that the client initiates: the controller application initiates the connection to the device. But with RFC 8071 we introduced this notion of "call home", where what really happens is the device initiates the underlying TCP connection to the controller application, and then immediately the protocols flip.
M
This is critically important because sometimes the devices are being deployed behind NATs, network-address-translation-type firewalls, which make it impossible for an external system to initiate a management connection to the device; so for the device to initiate the connection becomes critical. But similarly, it's necessary in some cases for the device to be able to ensure that the connection remains up, that is, a persistent connection, and hence actively testing
M
the aliveness of its protocol peer becomes important. We felt that we could use the keepalive mechanisms, as they're defined already in SSH and TLS, to do that for us. However, the Broadband Forum and Nokia were implementing this, and they discovered that TLS keepalives are not very well supported in the OpenSSL library that they were using.
M
Even though there's an RFC defining TLS keepalives, OpenSSL has not implemented it. In fact, they partially implemented it and then decided to remove that partial implementation, so it may not get back into OpenSSL anytime soon. Nonetheless, Nokia and the Broadband Forum were asking if we could then enable keepalives at the TCP layer, which seemed reasonable. There was actually a discussion with the TSV area, and Spencer Dawkins and others participated
M
in that conversation. We came to the conclusion that supporting keepalives at every layer of a protocol stack is critically important, and hence there's no reason not to enable the ability to configure keepalives also at the TCP layer. But then the question was how to do it. Next slide, please. So, in the IETF 103 time frame we started to discuss how we might be able to create additional protocol groupings, YANG groupings, for these other layers.
M
The idea here being that each layer, having its own grouping, can define the configuration for the keepalives at its layer. This is actually quite good, because every protocol has slightly different ways of configuring its keepalives, and so enabling the definition of the configuration to be co-located with each protocol layer was the right way to go. Next slide, please. That particular slide, the previous one,
M
you don't need to go back to it, but while I didn't say it at the time, it was kind of representing an "is-a" relationship. So, for instance, if you instantiated an instance of the HTTPS grouping, it would in turn instantiate an instance of both the TLS and HTTP groupings, and then in turn they would instantiate an instance of TCP. So that's an "is-a" relationship, to use an object-oriented comparison. But we found that actually a "has-a" relationship was better. So, again, that slide's not quite right, but on the bottom
M
there are sort of flat lists of the various protocols that can then be composed to represent the various protocol stacks. So, for instance, you could have HTTP directly over TCP just by grabbing the groupings from those two drafts, or, alternatively, you can have HTTP over TLS over TCP just by inserting also the grouping from the TLS draft in between the other two. So the composition becomes quite nice. That's currently what's been published, in the 104 time frame. Next slide, please.
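In YANG terms, that composition is just stacking `uses` statements. The sketch below is illustrative only: the grouping names are assumptions, not quotations from the drafts.

```yang
// Illustrative composition of per-layer groupings (names assumed).
grouping http-over-tcp-stack {
  uses http-client-grouping;
  uses tcp-client-grouping;        // HTTP directly over TCP
}
grouping http-over-tls-over-tcp-stack {
  uses http-client-grouping;
  uses tls-client-grouping;        // insert the TLS layer in between
  uses tcp-client-grouping;
}
```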
M
And just a little bit of detail on what the current draft defines. It defines three YANG modules. There's a common YANG module, and in that module it defines two groupings: there's a common grouping, and then there's what's called the connection grouping. It's the connection grouping that's being used by the other groupings.
M
The purpose of factoring out the common grouping is so that, in case maybe someday in the future TCPM would like to define a system grouping, it could represent, for instance, the TCP configuration of the operating system itself. So we have this common grouping, where right now the only thing it defines is keepalives. There's the connection grouping, and the only thing it does is use that common grouping; there's no additional configuration beyond that. Then there's a second module for the TCP client, and you can see that it's, well,
M
actually, it's not easy to see on the slide, I'm sorry about that, but it's defining essentially a remote address and a remote port, as well as, optionally, a local address and a local port, and then it uses the TCP connection grouping, so that's where it inherits keepalives. And then there's a server module, but the slide has messed it up pretty badly, so I won't try to speak through it; it's a very similar idea, where it's inheriting some of the configuration from the common grouping.
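The three-module layout just described might be sketched roughly as follows. This is a hedged paraphrase, not the draft's exact text; leaf names, types, and feature names are assumptions made for illustration.

```yang
// Rough sketch of the structure described above (names and types assumed).
module ietf-tcp-common {
  import ietf-inet-types { prefix inet; }   // not used here, shown for context
  grouping tcp-common-grouping {
    container keepalives {
      presence "Enables TCP keepalives.";
      leaf idle-time      { type uint16; units "seconds"; }
      leaf max-probes     { type uint16; }
      leaf probe-interval { type uint16; units "seconds"; }
    }
  }
  grouping tcp-connection-grouping {
    uses tcp-common-grouping;   // nothing beyond the common knobs today
  }
}

module ietf-tcp-client {
  import ietf-inet-types { prefix inet; }
  grouping tcp-client-grouping {
    leaf remote-address { type inet:host; mandatory true; }
    leaf remote-port    { type inet:port-number; }
    leaf local-address  { type inet:ip-address; if-feature local-binding-supported; }
    leaf local-port     { type inet:port-number; if-feature local-binding-supported; }
    uses tcp-connection-grouping;   // inherits the keepalive configuration
  }
}
```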
M
Hopefully the next slide works. Please, next slide. It's not that much better. What's supposed to be shown here is the flattening of all those `uses` statements from the previous slide into the tree diagrams, and I think we might be able to see the top one: there's the client grouping, and you see the remote address, the remote port, the local address, the local port, and then you can kind of see the keepalives, but it gets pretty terrible.
M
Unfortunately, that's very difficult to see; I really apologize for that. What's important here, though it's difficult to see, is, in blue (hopefully it's showing blue for you), "local-binding-supported" with a question mark. That's a feature. It's basically asking: does this device support the notion of allowing the local address and the local port to be configured at all, for the client, for instance, or is it always up to the system to wildcard,
M
you know, to randomly select what the local address and local port should be based on routing parameters or wildcards? Also, and I know you can't see it, but in the keepalives there's the idle time and the probe interval. Both of these are currently uint16, and the resolution of those is seconds. So I thought, actually Michael thought, that this would be interesting for the working group to discuss: whether that resolution is correct and accurate, as opposed to, say, tenths of seconds or milliseconds.
M
Okay, so here's an example. XML is being shown, but with anything that you do with YANG you can also simultaneously support JSON, and in fact there's another working group that standardizes CBOR, a binary representation of instance data that conforms to the YANG data models. But in this example, at the very top at least, we can get through some of it: there's a remote address, and notice here it's actually using a host name. So, on the previous slide, was there a question?
M
I thought I heard some feedback on the earlier slide. It defined this remote-address type as what's known as inet:host, which allows values to be either domain names or IP addresses, whether IPv4 or IPv6. So this example is showing that it's actually a hostname, whereas, and you can't see it very well, but down below in the server it's an address.
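An instance document along the lines described might look like this. The element names and values here are illustrative assumptions based on the discussion, not the slide's exact content:

```xml
<tcp-client>
  <!-- inet:host allows a domain name or an IPv4/IPv6 address -->
  <remote-address>controller.example.com</remote-address>
  <remote-port>4334</remote-port>
  <keepalives>
    <idle-time>60</idle-time>
    <max-probes>3</max-probes>
    <probe-interval>10</probe-interval>
  </keepalives>
</tcp-client>
```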
M
So this is an eye chart we won't try to go into, but what I was trying to impress upon you is that at the very top in the middle is the TCP common grouping; then off to the left is the TCP client and off to the right is the TCP server; and then there's sort of a hierarchy of SSH and TLS and HTTP modules, sorry, groupings, that are dependent on them also. And mostly this is feeding into the SSH.
M
So, what I guess I'm trying to impress upon you is that the strategy that's been put forth in this particular draft, of having these three modules representing a common, a client, and a server, is part of a framework, and the framework is rather robust: it's been shown to be able to support the composition of some fairly complex configuration hierarchies. That's the main point of this slide. Next slide, please. Okay, and this is also the last slide.
M
So, the next steps. From the authors' perspective at least, and also from the NETCONF working group's perspective, for what our primary intentions are, supporting the configuration of NETCONF clients and servers, we think we're done. Of course, we'd be willing to allow the current functionality or feature set to be published as an initial RFC, and we would hope that TCPM would actually take over any future updates to that RFC, doing bis
M
updates, you know, bis updates, as they would see fit. But before we can think to do a last call or anything like this, we want to first verify with TCPM whether or not the model, even as simple as it is, is general and good enough for all the situations in which TCP YANG models could be used. I guess I'll leave that question to the room; or, if there are any other comments or concerns, I'd be happy to hear them. Thank you.
H
Matt Mathis. The only thing that bothers me about this, I mean, the YANG stuff as such seems to be fine, but this group spends a great deal of effort making TCP idiot-proof, in the sense that the API doesn't expose things that people might think they could improve and thereby break things; and keepalives fell into that category. Generally, people who want to muck with it don't understand what they're doing, and they actually don't want to muck with it.
H
Now, there are some exceptions, but in most of the places where people come up with exceptions, you have to be careful, because the IETF is huge. For example, in BGP, at one point TCP keepalive was used to indicate BGP liveness, and it was realized that they needed to have a separate connectivity-instrumentation protocol and not rely on TCP's notion of liveness for BGP's liveness. And I suspect we've been very careful here about that kind of thing.
B
Can you stay here? Michael Tüxen, from the floor. Reading the description of keepalive: you're specifying a time after which the TCP connection gets dropped if there is no response from the peer, its idle time, times 60. I don't know whether the 60 comes from the value put in being in seconds; the outcome is in seconds, though, I don't know. And the other one is, isn't it...
H
The design of the keepalive is not to determine whether the network is healthy. It's to make sure that connections which are completely passive, where the other end goes away, eventually get reaped. It was solving that problem, not determining whether or not the network is healthy. So servers turn on keepalive to make sure that if the client just evaporates and the server has no pending data, the server's connection eventually dies. It's not the usual liveness question that people think about.
M
Okay, so just to respond to a couple of the comments. I didn't go over it, because the picture was rather garbled on the screen, but there's also a feature statement for whether or not the server supports keepalives at all, which, on a client or server basis, can be enabled or disabled. So, on a per-implementation basis, if they don't want to support those configuration knobs, they can completely disable them, and they won't be available for users to configure.
M
Secondly, keepalives is defined as what's called a presence container, which means that if you just put keepalives but don't configure any of those three knobs underneath, then the default values kick in; so there are, hopefully, sane default values, and again people wouldn't muck with them and get themselves in trouble. Lastly, that description statement, "times 60": maybe I need to revisit that, and it could be wrong, especially in the light of exponential back-off.
K
Mirja Kühlewind. Just to answer a couple of things: keepalive is actually something that comes up very often as something people can do wrong, because instead of the 60 seconds you can configure, instead of once per minute, a keepalive once per millisecond, and that breaks your network. So it's really something that you should be careful about. There is an exponential back-off: that's when your keepalive fails and you don't get a reply, then you exponentially back off; but if it's running and everything is fine...
C
So, actually, it would have been good to have the presentation that you have just seen at the last IETF meeting, because this would have simplified a lot, and it was a little bit my mistake that this didn't happen. But in general we see that there is this trend in certain other working groups that people feel a need to configure certain parameters in YANG, most notably the things that you can configure via socket options, and the keepalives are actually one such example. There are other ones as well.
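For concreteness, the kind of per-connection knobs being discussed, keepalives set via socket options, look roughly like this on a typical stack. The `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` names below are the Linux ones, an assumption about the platform, not something taken from the drafts; they correspond to the idle-time, probe-interval, and max-probes knobs discussed earlier.

```python
import socket

# Sketch: enabling TCP keepalives via socket options (Linux option names assumed).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # enable keepalives
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)  # seconds idle before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10) # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)    # failed probes before drop
```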
C
There is at least one YANG module out there that configures the MSS, which is also something that some stacks expose, for example, via a socket option. So you see TCP parameters in YANG modules increasingly, and that somehow raises the question to us as the TCPM working group: what do we do with that, and is there a bigger problem to be resolved? I explicitly raise this as an open question, because I believe that we should discuss this.
C
I have not fully come to a conclusion, but in general, if we look at a TCP stack, we do not only have the parameters that are exposed via socket options; obviously we also have configuration parameters, and typically this is subdivided into a global configuration of the stack, such as whether you enable, for example, ECN or SACK, and in many stacks you also have interface-specific configuration, for example, something about the MSS, or maybe related to the MTU, or other offloading-related parameters.
C
That's not a big surprise: stacks that support, for example, SACK typically have a boolean, or something similar to a boolean, parameter to turn that on and off, and the same applies to other standard optional TCP functionalities such as timestamps, for example, path MTU discovery, ECN, and a couple of other parameters. So you see that there is some commonality in what you can turn on and off; of course, the exact way you do that differs, for example, whether it's a boolean parameter or whether it's an enumeration.
C
We could decide to propose a YANG model that includes those parameters, such as SACK, and I've shown here one such example. So we could define, for example, a boolean parameter in YANG for enabling or disabling SACK, and then there would be a standard way in YANG of having that configuration. The benefit of doing this would be that it could be used in various other YANG modules, for devices, for example, or also in other protocol environments; that would be, for example, one such profile. But this could be useful.
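As a hedged illustration of what such reusable definitions might look like, here is a sketch of a grouping of common stack switches. The leaf names are invented for this sketch and are not taken from any draft:

```yang
// Illustrative only: a reusable grouping of common TCP stack switches.
grouping tcp-stack-parameters {
  leaf sack-enabled       { type boolean; default true; }
  leaf timestamps-enabled { type boolean; default true; }
  leaf ecn {
    type enumeration {    // some knobs are richer than a boolean
      enum disabled;
      enum passive;       // accept ECN if the peer requests it
      enum active;        // request ECN on outgoing connections
    }
  }
}
```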
C
Other modules could reference those standard definitions if they feel that there is a need to deal with such TCP configuration parameters. And the proposal here would be just to define groupings, similar to what you have seen in the previous talk. So we would not define a full data model here; we would only define groupings for certain parameters, because this allows reusing the definitions in a lot of different use cases.
C
I've looked at other parameters beyond what I've shown in the previous table. There are other ones that are often available in stacks, but the exact way they are configured starts then to depend on the different stacks. So you can see examples like the delayed-ACK time, or the initial RTO, or some very basic parameters of the retransmission engine, which are often configurable in many stacks.
C
But the details start to be different, and if you look in the individual stacks at how to configure that, each stack has found a different way to configure it. Of course, in those cases we could come up with one reasonable way to model it, but the risk that this does not have a one-to-one mapping to the specific configuration of a stack gets higher. And then, of course, there are huge areas of a TCP stack that are very specific to the implementation, for example, everything that deals
C
with buffers or flow control; there are a lot of heuristics out there that are typically very specific to the stack. Also, for example, the way congestion control algorithms are implemented is obviously very specific to the stack, and it's very hard to come up with a common set of parameters. So at the moment I don't try to enter that space, because there's a huge risk of boiling the ocean going there. And, as I said, what I've shown on the slide is what I promised to do in the last meeting.
H
I was going to make a comment on your last slide. What we did in, gosh, I forgot the number, the TCP extended information MIB: the introduction specifically says that if you have precise knowledge about how the implementation works, you can understand precisely how the variables are defined, but the variables are defined in ways that are sort of generic.
C
Okay; anyway, as I said, what I've done here is what I promised last meeting. I've basically completely rewritten the document, and I've also added a new section, so I'm looking for co-authors who have a background in that space and who will be around here to help me with this. If you look at the model right now, you will see it's not an actual YANG module.
C
Well, at the moment it's not standards track, whereas YANG models are typically standards track. As I said last meeting, of course, TCPM has a pretty high bar for standards track, so that is definitely a fun thing to think about, whether it should be standards track, because that would be the default in the rest of the IETF for YANG modules. But yes, of course, standards track in TCPM has a certain implication, and that is definitely something we would have to think about.
I
And I'm going first. I'm going to be talking about a draft. Why does the draft name have "tsvwg" and "cc" in it? "tsvwg" because it's supposed to refer to all transport protocols. Why present it here? Because TCPM has clue about how to design congestion control, which is what "cc" means. So let me tell you a little bit. Next slide.
I
So, this is not new, and you've probably noticed the IETF has said quite a bit about congestion control over the years. So is it clear, if you want to design something that does congestion control, or maybe doesn't, which document to look up? It is not. Well, I spoke to John at the QUIC interim, and we thought: yes, the answer to this must be that there must be a couple of documents we could just point to, and we could just write, quickly,
I
a four-page summary of the key principles, not the details, but the key things you must do if you design a congestion control. And then we could write this document so short that maybe we don't even have to publish it; we can just include it in various places and say: play well, and do this. It took more than four pages when I tried. So I've done some work, and that's what's in this particular internet draft. The draft revision started with -00, which I just kind of consumed;
I
I thought maybe there's something useful here, so I wrote -01, which was the same text made more coherent by getting other people to read it, and also by me leaving it for a while. I think it's way shorter, so I thought: oh yes, we're onto something good here. And then I re-submitted this at the IETF, which is where we are now, and there are still a few annoying typos I have, errors in my XML version; I'd be happy to fix those if people think this is useful. So what is it?
I
Section 3 of the document is the first bit where there's important stuff. It talks about the principles of congestion control. If you've been in the IETF for a while, you remember there's a BCP written on this by Sally about 20 years ago, and that probably isn't the best document to tell somebody how to run the internet today; but it might be. So let's try and write down some principles.
I
I picked three major principles. There's a diversity of path characteristics out there, so be aware of that when you design something. There's the fact that flows are multiplexed and congestion happens, and that's important to know about, and it's important to avoid congestion collapse, which was Sally's main point; this is the real thing that truly has to be avoided. And the second one is a little bit about understanding, not starving the other flows out, about how you apportion the bandwidth between them. But there are things in there; you'll have to read this.
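As background to the kind of behavior these principles point at, the classic TCP response to congestion is additive-increase/multiplicative-decrease. This little sketch is illustrative textbook behavior, not text from the draft being discussed:

```python
# Illustrative AIMD congestion-window update (classic TCP behavior, in MSS units).
def on_ack(cwnd, ssthresh):
    """Grow the window: slow start below ssthresh, else congestion avoidance."""
    if cwnd < ssthresh:
        return cwnd + 1.0        # slow start: +1 MSS per ACKed MSS
    return cwnd + 1.0 / cwnd     # congestion avoidance: ~+1 MSS per RTT

def on_loss(cwnd):
    """Multiplicative decrease: backing off is what prevents congestion collapse."""
    return max(2.0, cwnd / 2.0)  # never shrink below a 2-MSS floor
```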
I
Well, okay, my basic question, standing here, is: should I just file this and walk away from it? Or would anybody else read this, would they like to read it? Do people care enough that we should actually update this and write it as a proper document? I'm happy to take everything we have at the moment as input and work with other people to make this into a real guidance document. Do people care? Who would help me? Or I can happily write other things.
I
On the idea that flows are multiplexed, and the idea not to starve: I'd mention that routers can do things to help you achieve these goals. If I didn't mention fairness, it was because I didn't find RFC keywords mixed in with the fairness text that I thought had to be in here, but that is the sort of thing we should talk about, because it's important to say something. Probably we should say something about fairness, even if it's something that isn't a "must" or a "should".
D
I was at the QUIC interim, and I'm trying to think about how strongly QUIC can actually say: you can do any congestion control that you like, providing it's reasonable. And then, what the heck is "reasonable"? So I don't want a bible about how to design congestion controls, but I would like to have some kind of reasonableness.
P
I am happy to see this work happen; I'm happy to review, and more than that, I'm happy to help by contributing text and by helping shape it. I agree, and maybe I didn't understand Bob correctly, but I actually don't want a checklist, in the sense that checklists are not going to be exhaustive, and that's their failing in some ways: it's never going to be exhaustive.
P
So what's the point of having a checklist? I want to have considerations; that's what's useful to me in this sort of a document. Maybe this could lead to test suites and other things that actually implement checklists; no, not that, we've done this before, and I definitely don't want to instigate that work here again. But beyond that, I think it's super useful as a way to think about what is reasonable. I think that's a conversation we've not really had: we've had conversations about "perfectly fair", but not about what's reasonable.
S
My name is Carlos Gomez, I'm from UPC, and I'm going to present this new draft, entitled "TCP ACK Pull"; my co-author is Jon Crowcroft from the University of Cambridge. So, first of all, let's go through the motivation for this new document. It is well known that delayed ACKs allow reducing packet overhead in some conditions. However, it's also true that delayed ACKs may be detrimental to performance in some scenarios.
S
For example, the sender's memory resources cannot be released until the ACK, which is delayed, arrives, and due to this delay there might be problems if there's not much memory available, which might even lead to packet drops if subsequent packets need to be sent in the meantime. Also, IoT devices typically run on simple, limited energy sources, such as simple batteries, and depending on the technique used by the device for energy conservation, for example, if the device uses a radio interface, the device might stay with the radio interface on, awake, consuming energy, while awaiting the ACK, which is delayed.
S
So the delay contributes to increasing the energy consumption and decreasing the lifetime of the device. In addition, the delay might interact negatively with layer-2 mechanisms in some IoT technologies, where there are some specific opportunities for transmission and reception of packets, and if, due to the delay, some opportunity is lost, it means that the next opportunity will only arrive after some longer delay, which then in turn exacerbates the previous two issues.
S
So we might want to consider solutions, and we might think: okay, can we, for example, try to disable delayed ACKs at the receiver, if at all possible? And the answer is that perhaps no, that's probably not a good solution, because the receiver may interact with a variety of devices, and delayed ACKs may still work as intended on many connections; and even considering a specific sender, the sender may offer a mixed traffic pattern where delayed ACKs work as intended for some part of that traffic.
S
As for the actual flag: this would be, more specifically, the sixth bit of the 13th byte of the TCP header, and the mechanism would be as follows. On the sender side, if the sender wants to request an immediate ACK for a data segment, then it sets the ACK pull flag; and on the receiver side, upon reception of a data segment with the ACK pull flag set, the receiver, if it conforms to this specification, must send the ACK immediately. (There's a question? Okay.) Then, after the internet-draft submission deadline, there was some nice discussion on the mailing list.
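The receiver-side rule just described could be sketched like this. The function and parameter names are invented for illustration, and the delayed-ACK threshold of two segments is the conventional value, not something this draft specifies:

```python
# Sketch of a receiver's ACK decision with an "ACK pull" override (illustrative).
def should_ack_immediately(ack_pull_set, unacked_segments, delayed_ack_limit=2):
    """Return True if an ACK must be sent now rather than delayed."""
    if ack_pull_set:
        return True  # proposed rule: ACK immediately when the flag is set
    # Classic delayed-ACK behavior: ACK at least every second full segment.
    return unacked_segments >= delayed_ack_limit
```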
S
There was good feedback, many good comments sent on the list; I would like to thank everyone for that, and I've tried to summarize the feedback on this slide. One point is that, of course, a TCP header reserved bit is an expensive resource, and we are well aware of it; that means that we need to carefully assess the pros and cons of this proposal, or perhaps some alternatives. Then, also, the use of the maximum ACK delay option was suggested; however, with that option, we understand there might be two issues.
S
One is that it's an option that would offer a solution only at second-level granularity; and on the other hand, this option entails an additional header overhead of 28 bits, which might be not so good in some scenarios, such as IoT. Then there was another comment, or suggestion: perhaps we might redefine the push flag as having the ACK pull semantics. This could be phrased as: a TCP MAY not delay ACKs for data segments with the push flag, which in principle would be allowed by RFC 1122.
S
Therefore, perhaps having a separate ACK pull flag would allow having both things: still using delayed ACKs in those cases where they're useful, and also, separately, supporting the ability to request an immediate ACK when needed. And finally, there was another point made, that selfish devices might want to always use this mechanism.
S
Therefore, we might need some clear rules or recommendations on when this should be used or would be allowed. Finally, there's a security consideration, at least one, to be made here, which is that there's a possible DoS attack as a result of this ACK pull mechanism, especially on a resource-constrained receiver, whereby an attacker may send a large number of messages to the victim node, requesting an immediate ACK for each, in order to contribute to depleting resources from the device, for example, energy resources.
R
Questions? Yes: so, when I read this draft, it was not quite clear to me what the exact problem space was that you want to address; you mentioned IoT. And the other aspect that I was missing was: we do have a couple of mechanisms, like, for example, D-SACK and TLP, which could be exploited unilaterally to exactly this end.
S
Thank you. I will definitely look at the mechanisms suggested. And regarding the problem space: actually, there seems to be a problem in different domains, and there are two examples I mentioned. One is IoT; another one is high-bitrate environments. Perhaps there are some other scenarios, so this is perhaps a more general problem.
D
I was exactly going to say that: I think this is a more general problem. As we haven't got much time, I'll leave it there, but I support work in this space. Things like ACK rate control from the sender as well could be folded into something here, so maybe we need to start working on requirements for this before we...
H
The really tough part of it, in addition to having a better requirements statement, is actually getting data. There are parallel problems in both SCTP and QUIC, and partial solutions. It turns out that one of the lessons from QUIC is that this actually matters a lot in certain parts of the internet and for certain applications. Having that data, rather than speculating about why it's important, will change this conversation.
B
Michael Tüxen, from the floor. I'm not that much concerned about the security issue you raised, because as an attacker I can also always send you duplicate data and I get ACKs for each one. That gives you a way to elicit an ACK immediately: just send data that is duplicately transmitted, and out comes an ACK. Again, it costs you one byte more, and you get an ACK. Okay.
F
Just to point out that the delayed-ACK timeouts are reduced now in existing stacks; it's no longer 200 milliseconds, it's down to like 40 or 50, so the impact of a delayed ACK at the remote end is less now compared to before. The second point was: if the sender wants to give a hint to the receiver that there is no subsequent packet being sent, then in that case you could make this a hint rather than a forcing mechanism.