From YouTube: IETF104-TCPM-20190325-1350
Description
TCPM meeting session at IETF104
2019/03/25 1350
https://datatracker.ietf.org/meeting/104/proceedings/
A
Okay, and I'd like to welcome all of you to the TCPM session here at this IETF meeting. First of all, for those of you who don't know me, my name is Michael Scharf, and next to me is Michael Tüxen. I'd like to start today's session with the usual Note Well. Well, this is not the first session today, so I assume that all of you know that this is an IETF working group, so everything that's done here is covered by the Note Well.
A
Okay, we have one volunteer — okay, thanks a lot. Hopefully you use the Etherpad, as I will also, as far as I can, pay attention to the Etherpad. And maybe you can put this in a collaborative style, but anyway, we do need note-taking. I will monitor the Meetecho myself, but of course, if something shows up there, we would appreciate it if somebody else can also relay that. Do we have another volunteer for Jabber scribe?
A
We've put together the agenda; this is what has been sent out before the meeting. So our proposal is first, of course, to do the usual status update. Then we have two presentations on working group items: first Olivier on the Convert draft, and then Mirja Kühlewind on AccECN and ECN++. And then we have today quite a couple of other presentations that are not at all about working group items.
A
Okay, that's not the case, so then I will move forward with the usual status update. So first of all we have one recent RFC: it's the alternative backoff (ABE). It's only one recent RFC, but we have a couple of documents that are getting closer to completion in the working group, so hopefully in the next meetings this list will have a couple more entries.
A
Then I'd like to briefly go through all working group documents. For the first three ones we will have presentations here, so I will not say too much about them. We will see a presentation on the Convert draft, which has been recently updated. We will have status updates on AccECN and briefly on the generalized ECN, or ECN++, so I will also postpone the discussion on that one to the actual presentations.
A
In any case, having expired working group documents is not a good status for such a document, so hopefully we get resubmissions of this document. Definitely we have to have a discussion on whether that document is in a stable state and if it can move forward, but the first thing for that to happen is that we need an active document.
A
The second document, EDO, has not changed its status recently, and so I assume there are still ongoing testing efforts on basically the question of whether that scheme can be implemented and what the potential issues in this implementation are. And then, last but not least, we have the 793bis document. This is a document that has recently been updated, and I've put here on the slide a comment from Wes, who believes that the document is getting into pretty decent shape.
A
We will not discuss this document in this meeting, but we, the chairs, are discussing with the editor what to do, possibly in one of the next meetings, as the document is moving forward. And I'd just like to point out that in his last email to the mailing list, Wes has flagged a couple of sections in the most recent version of the document, and he is looking for feedback on those sections. So please have a look at those questions.
A
We recently performed the working group adoption call on the 2140bis document. The working group adoption call ran until Friday last week. We didn't get a lot of feedback, but there has also been no pushback on working group adoption. So the understanding of the chairs is that there is consensus in the working group to adopt this document.
A
Okay, seeing no comments, the consensus then is to adopt it as a TCPM working group item. And then this is my last slide. This is just a heads-up on a document that is not in TCPM, but we have repeatedly presented the document here in TCPM, mainly because it's quite related to our work here.
A
It's the document on constrained-node networks. This document has been presented in detail last meeting, and this triggered quite a bit of excellent reviews after the last presentation. Specifically, I'd like to highlight that we got a pretty good list of feedback comments. The authors are working on addressing these comments; there will be some technical changes in the document to address all the comments that have been made, and I assume the main author will send a summary of the changes to the list. But other than that, the feedback in general seems positive.
A
So
the
authors
believe
that
this
document
is
getting
pretty
close
to
working
group
last
call,
and
so
that's
why
I
asked
the
working
group
to
carefully
review
the
most
recent
version
before
it's
not
a
tspn
document,
but
it
has
been
agreed
in
the
past
that
there
will
be
a
joint
working
group.
Last
call
in
the
library
I
be
working
like
that
IG
working
group
and
TCP
M.
So
we
will,
you
will
see
traffic
online
tests
on
the
mailing
list
and
that
is
actually
on
the
last
slide
in
the
chair
part
also.
B
Transport is between the client and the converter, and the motivation for this work is to allow a TCP connection to be established to a proxy, so that you can learn whether a remote server supports a given TCP option or not. The main use case is Multipath TCP, so that you can get Multipath TCP on the access network through the proxy and then learn whether the final destination supports MPTCP or not. Next slide. Let me just show you how it works, so next slide.
B
So the client sends a SYN to the converter to connect to a specific server, and this Convert message is encoded as a TLV in the SYN payload. The converter will send a SYN to the server. The server will reply (next slide), and then the converter can confirm (next slide) the establishment of the connection, by using TLV information in the payload to confirm that the connection to the remote server has been established. And then, next slide.
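The exchange just described — a Connect request carried as a TLV in the SYN payload — can be sketched as a toy encoder/decoder. The field sizes, the type number 10, and the byte-counted lengths below are illustrative assumptions, not the exact wire format defined in the converters draft.

```python
import ipaddress
import struct

# Hypothetical TLV type number for a "Connect" request (assumption, for
# illustration only; the draft assigns its own type codes).
CONNECT_TLV = 10

def build_connect(remote_ip, remote_port):
    """Build a tiny Convert-style message: a fixed header followed by a
    Connect TLV naming the server the converter should reach."""
    addr = ipaddress.ip_address(remote_ip).packed
    value = struct.pack("!H", remote_port) + addr
    tlv = struct.pack("!BB", CONNECT_TLV, len(value)) + value
    # Header: version, total message length in bytes, reserved (assumed layout).
    header = struct.pack("!BBH", 1, 4 + len(tlv), 0)
    return header + tlv

def parse_message(message):
    """Return (version, [(tlv_type, tlv_value), ...]) for a message."""
    version, total, _ = struct.unpack("!BBH", message[:4])
    tlvs, offset = [], 4
    while offset < total:
        tlv_type, tlv_len = struct.unpack("!BB", message[offset:offset + 2])
        tlvs.append((tlv_type, message[offset + 2:offset + 2 + tlv_len]))
        offset += 2 + tlv_len
    return version, tlvs
```

On the wire, a payload like this would ride in the SYN the client sends to the converter; the converter parses the Connect TLV to learn which server to contact.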
B
At this stage we have a transparent byte stream through the converter between the client and the server, in both directions. Next slide. So what are the main changes since the last presentation of this work? We got feedback from different implementers, who asked us to tweak the protocol a bit.
B
So
in
the
previous
version
we
relied
on
TFO,
but
we
know
that
here
who
has
issues
in
some
deployment
use
case,
and
what
we
did
is
that,
instead
of
using
the
TFO
option
to
encode
a
cookie
to
secure
the
connection
between
the
client
and
the
server,
we
move
the
cookie
to
the
user
space
by
using
a
specific
TLD.
So
the
same
functionality
is
there,
but
it's
not
using
TCP
options
anymore
next
slide.
So
let
me
show
you
how
the
cookie
works,
so
next,
so
the
clients
and
the
scene
with
the
connect
message
to
the
converter.
B
The converter does not have a cookie for this client, because there is no cookie inside the SYN, and so what the converter will do is compute a cookie. One way to compute a cookie could be like we compute the cookies in TFO: by using a hash of the IP source address. Then it can return the cookie in the SYN/ACK, with an error message that indicates the cookie that the client should use, and then, next slide, we can restart the connection again: a SYN in which we encode the cookie that was returned by the converter.
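The stateless cookie described here can be sketched with an HMAC over the client's source address — the same idea TFO uses. The secret, the digest choice, and the 16-byte cookie size are assumptions for illustration; the computation is up to the converter.

```python
import hashlib
import hmac
import ipaddress

# In a real deployment this would be random and periodically rotated.
SECRET = b"converter-local-secret"

def make_cookie(client_ip):
    """Derive a cookie from the client's source IP with a keyed hash."""
    packed = ipaddress.ip_address(client_ip).packed
    return hmac.new(SECRET, packed, hashlib.sha256).digest()[:16]

def check_cookie(client_ip, cookie):
    """Verify a presented cookie; constant-time compare avoids leaking it."""
    return hmac.compare_digest(make_cookie(client_ip), cookie)
```

Because the converter can recompute the cookie from the SYN's source address, it needs no per-client state between the two connection attempts.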
B
Now the converter can check the cookie, by doing the hash computation again, for example — but there are other possibilities. Then the converter can establish the connection: it sends the SYN, and then, next slide, we get the response from the server, and, next slide, you get the SYN/ACK back, and the connection is established as before. Next.
B
The cookies are also used in edge cases where the client would send a stale cookie. Next slide. So you send the SYN with an invalid cookie, and this can be checked by the converter, and the converter can return an error message saying that this cookie was not authorized — because, for example, the cookie has expired on the converter, and the converter wants to get a new cookie from the client to reopen it.
B
This verifies that the client owns the IP address from which the SYN is received. So it's exactly the same thing as what we do in TFO by using TCP options, except that you do that inside the payload. So we can have cookies that are much larger than the ones that we use in the TFO option. The main use case for this Convert draft is to support Multipath TCP in access networks.
B
We will see what happens when the server supports Multipath TCP. Next slide. So again we send a SYN; next slide, we send it to the server. The server, next slide, will reply with the MP_CAPABLE option, and then the converter will return, in the extended TCP header TLV, the MP_CAPABLE option that was returned by the server. And so the client now knows that, to reach this specific server, it can use Multipath TCP, and so, for the next TCP connections...
B
It
can
send
a
syn
directly
to
the
final
server
without
using
the
proxy.
So
that's
a
way
to
learn
whether
to
be
able
to
use
MP
TCP
on
the
access
network
through
the
convertor,
when
the
server
does
not
support
the
capacity
and
when
the
server
supports
multi
pass
TCP,
you
can
learn
it
and
use
direct
connection
to
the
server
next
slide.
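Once the converter has relayed the server's options back in the extended TCP header TLV, the client just scans them for the Multipath TCP option. TCP option kind 30 is the IANA-assigned MPTCP kind; the walker below is a minimal sketch that assumes a well-formed raw options byte string.

```python
MPTCP_KIND = 30  # TCP option kind assigned to Multipath TCP

def parse_tcp_options(raw):
    """Walk a raw TCP options byte string and return (kind, data) pairs."""
    opts, i = [], 0
    while i < len(raw):
        kind = raw[i]
        if kind == 0:            # End of Option List
            break
        if kind == 1:            # NOP: single byte, no length field
            i += 1
            continue
        length = raw[i + 1]      # length includes the kind and length bytes
        opts.append((kind, raw[i + 2:i + length]))
        i += length
    return opts

def server_supports_mptcp(raw_options):
    """True if an MPTCP option appears among the relayed server options."""
    return any(kind == MPTCP_KIND for kind, _ in parse_tcp_options(raw_options))
```

If the check succeeds, later connections to that server can skip the converter entirely, as described above.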
So, to summarize what happened: we believe that now we have a simplified design, which takes into account all the feedback we received from implementers and on the mailing list.
B
There is ongoing work in different standardization bodies on using this Convert protocol, and it's being adopted by two standardization bodies. One is the Broadband Forum, which uses it as one of the solutions for hybrid access networks based on Multipath TCP — this is WT-378, which, now after approbation, is being finalized. And it's also used by 3GPP for the ATSSS service, which will allow combining 5G and Wi-Fi, but also other types of networks. And this is part of the work in 23.793.
D
Praveen, Microsoft. I'm very curious why you decided to not use TCP Fast Open, because Fast Open does have an option where the cookie size can be zero. If the server doesn't care about enforcing security, the cookie size can be zero, and the advantage of using Fast Open is that most operating systems, at least on the client side, already support it, and the APIs are there for you. Otherwise you're looking at using raw sockets or something to do this.
B
So
that's
so
we
have
a
specific
protocol
and
we
want
to
be
able
to
confirm
that
the
scene
comes
from
a
client
that
we
know
can
receive
simply
Zak.
So
for
that
we
need
to
have
a
cookie
and
the
cookie.
There
are
two
ways
to
encode
it.
One
is
to
encode
it
in
G,
TCP
other
as
a
TCP
option,
or
we
can
encode
it
in
the
payload.
E
Just one addition to the comment about the support of the zero-length cookie for TCP Fast Open: we used to have that in previous versions of the specification, up to version 4. But we had discussions with other implementers and vendors, and they see that, at least for their user equipment, it's really difficult to have TFO supported directly as an option, and this is why we have moved to the overall design that provides the same functionality in a TLV instead of the TFO option.
D
I'd be very curious what kind of middlebox issues you see when you include the data in the SYN without the TFO option, because you're going to see two patterns of traffic on the network now: one that has the TFO option and the data, and one where there is no option but there's going to be data. So it just seems to me TFO is the cleaner approach, but yeah.
B
Yeah, we have not yet tested that in a real network with middleboxes, but I think, in the deployment use cases that I mentioned on the last slide, there are use cases where the network operator will control the path. And so we know that there is no middlebox that will mess things up. And whether we have a middlebox that messes up with having data in the SYN payload or with having a TFO option — I think it's the same problem and I don't see a difference.
F
Even so — that part I agree; I think there are already problems with middleboxes even if you have the option. But I think having the option is the better design approach here, if you're talking about an application protocol, because otherwise it changes the semantics of TCP as a transport. But I also need to look at the draft in detail, because I don't think you rule it out completely, so it depends strongly on the exact phrasing in the draft what makes sense.
B
So TFO was introduced in TCP to support web applications and to have a way to support putting data in the SYN for applications that cannot encode the cookie information in the application itself. We are here in a different situation: we have an application that can carry the cookie inside the application itself, and I think it's cleaner to put the cookie information inside the application.
G
Christoph Paasch, Apple. I want to address the comment about the middlebox issues when using TFO without the TFO option. Independent from the converter, we have done experiments, and it actually is — it's not much better, but it's better than with the option. So you can use SYN with data and have roughly a 10% higher success rate.
A
So this is Michael speaking as chair. We have here on this slide a very unusual thing: other SDOs directly reference our work. So my question to the authors — or maybe to other people in the room who can share more details on how this work is used by the Broadband Forum or 3GPP — the specific question that I'd like to raise is whether there is an issue with this being experimental work, because we made a call some time ago that this is an experimental protocol.
E
So, for the first question that you asked, about the current aggregation work in the 3GPP specifications: so far, the Convert specification is the only specification which is used for the aggregation there, so they are relying exclusively on the solution we are developing here in this working group for their work. And they have some time constraints in finalizing their specification, so they are waiting for us to finalize the RFC here.
E
So if we can finalize it by May this year, this would really be great for their work, because, I would say, only editorial notes remain — we need the RFC to be published with its number so that we can refer to it. That's for your first question. For the second one — experimental versus standards track — just for the record: when we started this work, we thought we would have done it in the MPTCP working group, and at that time the MPTCP...
E
...specification is experimental, so we left that as experimental there. So for us, we are open here: if there is no extra delay in the development of the specification, I would personally be in favor of considering this work as a candidate for standards track. But if this would induce, I would say, additional discussions and a longer time to have the discussion finalized, we can go with experimental and then see the feedback from the field and the deployment.
A
So this is about between 5 and 10. So my understanding of the feedback is that we do need an update of the document, but, other than that, we are hearing that this document is moving forward, and there seems to be both ongoing implementation effort as well as use in other SDOs. So that's a good sign. So, from a chair perspective, we would head for working group last call relatively soon.
F
Hello, I'm Mirja Kühlewind; my co-authors are Bob and Richard, who are here as well today, and this is a quick update — and I hope we're almost done with this document. So first, a quick recap: what was the problem that we're solving here?
The problem is: ECN is there, ECN has been around for a while, and there is feedback from the receiver to the sender to tell if there were any markings seen on the path, and then the sender is required to — or should be — reacting to it with some kind of congestion reaction.
F
To provide this feedback, there are already three bits in the TCP header, but the feedback itself is limited to providing just one signal per round-trip time. So with the classic ECN approach you don't know how many markings there have been: has it been severe congestion, or just very light congestion? If you have this information, however, you can adapt your congestion control to it and make it more reactive. Next slide. So what does AccECN do?
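The difference from classic ECN's one-signal-per-RTT feedback is that AccECN keeps counting marks: the ACE field in the TCP header is a 3-bit counter of CE-marked packets that wraps, so the sender recovers the number of new marks modulo 8, and must rely on the larger option-carried counters when more than 7 marks may have arrived between ACKs. This toy sketch shows only the wrap-around arithmetic, not the real header encoding.

```python
ACE_BITS = 3                 # the ACE field is 3 bits wide
ACE_MOD = 1 << ACE_BITS      # so the counter wraps modulo 8

def new_ce_marks(prev_ace, cur_ace):
    """Number of packets newly marked CE since the last ACK, assuming
    fewer than 8 new marks per ACK (beyond that the wrap is ambiguous
    and the option-based counters are needed)."""
    return (cur_ace - prev_ace) % ACE_MOD
```

With classic ECN the sender would only learn "at least one mark this RTT"; here it gets a per-ACK count it can feed into a more proportional congestion response.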
F
Last time, I think, there was already a main question about what happens if this experiment fails: have we then burned all the bits in the header for the negotiation? So we make sure that all unused combinations still lead to some kind of AccECN negotiation. So we have a way open to further extend and update the AccECN specification in the future. Do you want to interrupt? No?
A
Michael speaking from the floor. So we have discussed this question in the IETF quite a bit. To me, what is presented here makes a lot of sense for forward compatibility, if you are sure that AccECN is the right thing to do forever — but it's experimental. So we don't know what the experiment outcome will be.
A
The last sentence — that we make the behavior of AccECN predictable for future protocols — assumes, to me, that the future protocol is AccECN. And, as an individual contributor, I personally could foresee a world where the AccECN experiment fails and we will do something else, and then, to me, this specific wording doesn't help at all. I don't mind; at the end of the day, to me, as I said, this is an experiment.
I
Bob Briscoe. Oh yeah, that last sentence, which I wrote — obviously I didn't write it in a way that Michael would not find ambiguous, because I meant to say "for future AccECN feedback protocols". In other words, if this fails, what do existing AccECN servers do? They've got...
I
...one of three choices: they do no ECN, RFC 3168 ECN, or what they've currently been coded to do, which is AccECN. A client talks AccECN only to an AccECN server; the question is what any other server, watching that client's traffic, gets: a field value that currently doesn't exist. That's all this question asks people.
J
Gorry Fairhurst, with my TSVWG chair hat on. This document is kind of related to the L4S environment, and it runs an experiment, and L4S is an experiment. So I'm just kind of calling out that there are now two experiments which have some sort of dependency on each other, and maybe it is necessary to have some form of ECN feedback for L4S to work. So I understand this; I'm just asking for a comfort level, amongst the editors and maybe most of the people, that there are two experiments here.
F
Okay. In the recent version we also updated, based on a discussion from last time, which was triggered by Praveen mainly, and we tried to clarify that the option is really optional — and that also means, depending on your use case, you might have benefits in not even implementing it, because that can optimize your code. So we did change the wording a little bit to hopefully make this clear. No? I mean, yeah.
F
This is the wording — or check the draft. Next slide. And then the other discussion was triggered by Yuchung, who was favoring the DCTCP-style feedback over the feedback we are providing right now. We had a lot of discussion about it, and part of the problem is that the design we provide right now is probably more optimized for an Internet scenario, where you expect a low number of congestion markings.
F
While
in
a
datacenter
scenario,
you
might
have
a
high
number
of
congestion
markings
and
therefore,
in
some
cases
the
data
send
the
TCP
feedback
might
be
more
optimal.
However,
the
optimization
really
depends
on
the
offloading
you're
implementing
and
there
is
a
way
to
also
optimize
the
offloading
for
equities,
and
so
we
talked
a
little
bit
more
about
this
in
the
draft.
But
more
importantly,
is
also
that
there
was
at
the
hackathon.
F
There
were
people
working
on
the
accurate
ECM
page
and
working
trying
to
get
this
in
the
kernel
and,
at
the
same
time,
also
working
on
offloading
and
the
problems
we
see
with
offloading,
we
believe
are
mostly
historical.
We
don't
even
know
how
this
got
into
the
shoe
and
so
they're
trying
to
fix
that
and
provide
patch
patches
for
that
as
well
yeah
and
that's
kind
of
half
way.
What
I
already
said
so
Olevia
two
months
is
working
based
on
like
my
original
type.
F
...proof-of-concept implementation — he's basically reimplementing it completely, which is great — and he's bringing this into the kernel. The first step will actually bring it into the kernel without the option, to really minimize the changes we have to do in the kernel, and then, later on, provide another patch to have the option part as a compile-time option, if you want to use it. So that's ongoing. I don't think the patch, or the pull request, has landed yet, but it should be this week, probably.
A
Okay, sorry — any further comments? First of all, as we've seen here on the line, the last comments specifically were actually about implementation details, such as offloading. So that's why I wonder, specifically to the implementers in this room: do we have any further comments on the most recent version of the protocol, or do you feel comfortable with moving this forward?
And, specifically among those who have read the document: is there anybody who has concerns with starting a working group last call relatively soon — as I said, after we see the next update addressing the minor remaining things? So this would be an excellent opportunity to speak up. Okay, so for the note-taker: there are no concerns regarding starting a working group last call.
A
So, regarding the next steps: first of all, I think we are waiting for an update, even if it's small editorial things, so that's definitely something that should be done. And then the chairs will check if we can start working group last call; we can do this on the list in any case, so that doesn't have to wait for the next meeting.
A
So now I'm speaking from the presenter line, and I'd like to present a new document that has not been presented in TCPM before. It basically raises the question of whether we have to start working on a YANG model for TCP. So, next slide, please. The background of this work is mostly TCP implementations on embedded devices. That is something that, in TCPM, we actually hardly discuss, because we focus a lot on the TCP stacks that run on hosts.
A
That
is
something
that
has
triggered
a
lot
of
work
in
the
IETF
outside
transport
area,
but
there's
one
specific
issue
that
now
matters
positively
to
this
working
group,
namely
the
fact
that
we
do
have
a
TCP
map.
So
it's
standardized
in
RFC,
4022
and
I
try
to
figure
out
who
has
implemented
it.
So
I've
found
data
sheets
of
at
least
three
different
devices
from
different
vendors,
claiming
that
they
have
an
implementation
of
that
myth.
A
Now, the IETF has decided to deprecate MIBs, more or less — or at least to move to NETCONF as the superior technology — and that now raises the question of what happens with the management of TCP stacks on devices that at the moment use the MIB and SNMP, if the management moves to NETCONF. There is one candidate solution for devices that have used the MIB: there is a way to translate a MIB to a YANG module.
A
So
there's
an
algorithm
to
do
that
and
in
fact,
I
have
found
signs
that
vendors
might
even
do
that.
But
of
course,
that
is
a
translation
of
a
pretty
old
myth
and
if
you
check
the
details
of
that
myth-
and
you
will
see
that
it's
a
very
old
RPC
actually
which
brings
up
the
question
if
this
is
indeed
the
right
thing
to
do,
and
that's
specifically
the
question
I'd
like
to
raise
here
and
I
want
to
put
up
a
disclaimer
right
from
the
beginning.
A
So
this
talk
is
really
about
devices
that
are
managed
by
network
management
protocols
such
as
net
config,
a
risk
on
seeing
I'm,
not
necessarily
speaking
about
an
end
host
operating
system.
Here
because
thought
you
know
very
well
that
there
are
other
means
to
configure
TCP
stack
so
I'm
talking
here
about
devices
that
are
natively
managed
by
yang
modules
in
order
to
analyze
a
bit
little
bit
more
detail.
What
we
have
right
now
there
and
I've
put
on
the
next
slide.
A
The
content
of
the
TCP
mmit,
but
I
decided
not
to
use
the
map
itself,
but
instead
to
convert
it
to
a
young
module.
I'm
not
saying
that
this
is
a
reasonable
thing
to
do,
but
it
at
least
gives
overview
of
what
the
TCP
map
originally
has
standardized,
and
it
also
provides
solution
how
a
young
model
could
hypothetically
look
like
if
we
decided
to
standardize
a
young
module
for
TCP?
You
can
see
these
auto-generated
the
immortal
here
or
more
specifically,
the
tree
version
out
of
it.
So
this
is
not
there's
no
editing
here.
A
It's
an
fully
automatic
convert,
a
conversion
of
the
TCP
map
and
if
you're
a
little
bit
familiar,
is
the
tree
structure
of
your
modules.
You
can
see
here
the
key
pieces
that
we
have
in
this
young
module.
So
the
first
important
point
to
note
is
that
most
information
in
the
TCP
mate
is
weak
only
so
actually
it's
mostly
amid
for
monitoring.
There
is
hardly
any
configuration
in
that
map,
except
for
one
detail
as
if
you
go
into
another
part
of
this
slide,
you
can
see.
A
...it contains more or less four different sorts of entries. First, there is a little bit of information about the TCP configuration, but it's limited to the configuration of the retransmission timeout — most notably the minimum timeout — and if you look at what's actually in the MIB in more detail, it's pretty outdated; there are pretty outdated references to TCP specs in there.
A
Second,
the
mid
includes
a
number
of
stat
counters,
for
example
on
connection
statistics,
and
then
there
are
two
tables:
there's
a
connection
table
that
basically
lists
the
active
connections
of
that
device
and
there's
a
TCP
listener
table,
and
that's
basically
what
the
myth
is
all
about.
So
we
see
it's
a
pretty
simple
myth
after
all,
and
that
is
probably
also
one
of
the
reasons
why
it
might
have
being
implemented
since
it's
a
relatively
simple
model,
and
that
obviously
brings
us
to
the
question
on
the
next
slide.
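For a sense of what the connection table models: the information the MIB's connection table exposes is the same bookkeeping any stack already maintains — on Linux, for example, it can be scraped from /proc/net/tcp. The parser below is a Linux-specific sketch, shown only to illustrate the kind of data such a table holds.

```python
def parse_proc_net_tcp_line(line):
    """Parse one row of Linux's /proc/net/tcp into (local, remote, state).
    Addresses are hex-encoded little-endian IPv4; ports and state are hex."""
    fields = line.split()

    def addr(field):
        ip_hex, port_hex = field.split(":")
        octets = bytes.fromhex(ip_hex)[::-1]  # stored least-significant first
        return ".".join(str(b) for b in octets), int(port_hex, 16)

    return addr(fields[1]), addr(fields[2]), int(fields[3], 16)

# Kernel TCP state numbers relevant to the MIB's two tables:
ESTABLISHED, LISTEN = 0x01, 0x0A
```

A LISTEN row corresponds to an entry in the MIB's listener table; the other states populate the connection table.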
A
So, is this something that we have to care about? I have started this document mostly as a placeholder — I mean, I will not argue that this is something that can be adopted at this point in time. But still, if we look at what happens in other working groups, and how much effort is spent on designing YANG modules, to me there is a question in the room of whether we need something in that space.
We have wording in the charter that talks about interfaces that are maintained, i.e., the TCP MIB — this is the network management interface. So I believe, if we decided to do something in the longer run, it would fit in the charter. And, second, other IETF work in that space is standards track, which means we would have a pretty high bar here: the TCP MIB is standards track. And with that, I think we have a question.
K
Michael Abrahamsson. It's less of a question and more of a statement, actually. So the company I work for uses NETCONF and YANG to manage everything — we're trying to move in that direction — where it's administrating servers, home routers, core routers, whatever. Most of these things actually speak TCP. I would like a comprehensive YANG model, so please do not just auto-translate whatever we had lying in the cupboard, but do comprehensive work on the typical settings that are available on devices. Okay.
A
That's a fair point. I mean, one of the things that I could offer is that I would try to reach out to vendors in that space, because this is obviously something where we need input from vendors on what this set of typical things is. Of course we know there are a couple of parameters that are pretty common for TCP configuration and that are probably uniform, but obviously that is something where vendor input would be highly appreciated.
L
Lars Eggert. So I remember when we did the MIBs, right, because that was the cool thing that everybody needed to manage their networks, and it was super painful, because nobody actually cared — and ESTATS was even more painful than the basic MIB. And so, unless there's like a line of operators out there in the room screaming for this, I would not want to do this, simply because it's going to be very, very painful and very, very hard, and, you know, yeah.
E
So I think that we can find a balance in this work. I think that's something which is really interesting, and that can be useful. It is useful because, if we look at some simple existing models, we will see that they are touching on TCP stuff. Take, for example, the ongoing BGP YANG model: you will see that they are defining some generic TCP parameters there.
E
So
what
personally
I
would
like
to
see
if
you
have
something
here,
anticipate
that
this
are
I
would
say
our
first
level
bar
in
terms
of
the
generic
parameters
that
we
we
can
use
to
tune,
the
TCP
stack
for
a
sense.
It
misses
this
kind
of
stuff
and
so
on.
So
this
modules
can
reuse
what
you
you
can
define
here.
A
generic
I
would
say,
parameters
for
configure
on
TCP
M,
so
I
provided
the
example
of
of
BGP.
E
You know, but you can find other models in which they are touching exactly on the same stuff. So, instead of having something which is sprayed among a list of YANG models, I prefer to have something which is really, I would say, the reference. Not to have everything — as mentioned by the previous speaker, it would be really painful to have something which is already comprehensive from day one — but just start with something which is really, I would say, balanced, and then take the work from there. I support this work.
K
Michael Abrahamsson again. Of course I'll take less, if I actually get anything — it is not an all-or-nothing proposition. And I know there are a few people for whom the best thing that they know is to create the YANG model for management. This has never been a sexy thing; very few people enjoy it; it's painful, okay, I'll give you that. But there must be a bunch of basic functionality that most TCP stacks share — I've turned on SYN cookies in a bunch of different operating systems — so, I mean...
K
There
must
be,
at
least
like
I,
don't
know,
30
50
parameters.
That
is
very
common,
so
we
can
start
there.
That's
a
perfect
first
delivery,
just
the
knobs
they're,
typically
there
in
most
operating
systems,
and
we
can
augment
later
it
did.
It
doesn't
have
to
be
perfect
from
day
one
and
it's
very
to
deliver
something
in
six
months
and
then
another
thing
in
a
year
more
enhancements
in
a
year
just
keep
the
pipeline
going
then
to
get
a
prayer
to
sit
on
it
for
four
years
until
we
think
it's
perfect.
So.
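The "first delivery" being asked for here might look like a small catalogue of widely shared knobs, split by scope. Everything below — the names, the types, and the grouping — is a hypothetical illustration, not taken from any draft or existing YANG module.

```python
# Hypothetical starter catalogue of widely implemented TCP knobs, grouped
# the way a YANG tree might group them. All names here are illustrative.
COMMON_KNOBS = {
    "global": {
        "syn-cookies": "boolean",        # mentioned above as widely available
        "sack": "boolean",
        "window-scaling": "boolean",
        "ecn": "enumeration",            # e.g. off / on / passive
        "keepalive-idle-time": "uint32",
    },
    "per-connection": {
        "mss": "uint16",
        "min-rto": "uint32",
        "congestion-control": "string",
    },
}

def render_tree(knobs):
    """Render the catalogue as an indented, YANG-tree-like listing."""
    lines = []
    for branch, leaves in knobs.items():
        lines.append(f"+-- {branch}")
        lines.extend(f"   +-- {name} ({leaf_type})"
                     for name, leaf_type in leaves.items())
    return "\n".join(lines)
```

Splitting by scope also anticipates the per-connection versus system-default distinction raised later in the discussion.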
A
Just to be clear, I don't believe that we can ever come up with a very comprehensive model, because, I mean, TCP stacks do differ. What is possibly doable is the basic knobs that directly map to RFCs — a couple of things where we have an RFC that basically specifies that there is something optional that can be turned on and off — and that is something.
L
Logically, I want a pony. So this is — this is going to be the same thing we did with the MIB model, right? It's going to get super complicated very quickly. Even with SYN cookies — even something very simple — it changes from kernel version to kernel version, potentially, right? It changes from operating system to operating system. Multiply this with all the different options and parameters we have... And given that we don't have one now, and we don't seem to have huge operational problems...
A
M
So I'm looking at this at the moment from the TAPS perspective, and we will soon have, for TAPS, to define some way to set TCP-specific options. I think having a YANG model to import there and just say, okay, this is the list of properties we can import into TAPS, and these are the TCP-specific options, would be really, really cool. But I'm only speaking as a consumer of that model and not as the one who would be doing the hard work. Sorry.
H
M
It is connection-specific, but I think going through that direction could also be interesting: to see which of the settings of this YANG model cannot be applied on a per-connection basis and which can only be set as a system default. So I think these are two different kinds of settings, on different branches of a tree, and it might make sense to say, okay, these are connection-specific settings, and these are settings that apply globally to the system.
H
A
Okay, so, as I said before, at the moment this is a placeholder, so we don't have to come to a conclusion in this meeting. Technically, I think we can decide between these four things here in this working group. The first option is what Lars has just suggested, to do nothing, and then there are the other options.
A
B
A
A
Note that my proposal is that I would want to have a second talk on that, maybe at the next IETF meeting, with some additional insight from my side. That would be my offer to this working group. But if the working group believes that we should stop this activity right now, then I will also be fine with that. So if you in particular believe that I should not invest any further cycles on that, it would be perfectly fine to say that now.
H
So I think it's still an individual contribution, so you can do whatever you want, all right. But the feedback I got, or the impression I got, is: if you can reach out to some operators and figure out whether there is a need for using this stuff, then it might help us to address the concerns raised by Lars. I mean, we had positive and negative input. If we can figure out whether we will have consumers of this work, then we might make a positive decision.
I
H
If you set ECT on the SYN: we've since worked out the solution and written the patch. It's gone into the Linux mainline pipeline, been fed back, come out and gone back in again, but basically we've worked our way to the point that when you set ECT on the SYN, if you're using Accurate ECN or any future protocol that might replace it, the problem obviously won't happen. It will only happen if it's an old ECN setup SYN.
N
So thanks. I'm Duane Wessels. This is a presentation about a draft with co-author John Kristoff. I'd like to thank the chairs for inviting me to come and present this. This is work that's really in the DNSOP working group, so I don't normally participate in TCPM, but I hope you find this at least a little bit interesting.
N
Here's the abstract from the draft, and it talks about how this is a document about encouraging operators to permit DNS over TCP, and it describes some of the problems that can be encountered if that is not done. Next, please. So, a little bit of how we got to this point. Going back to RFC 1123, it talks about DNS over UDP and TCP. It says, you know, resolvers and servers must support UDP and should support TCP for non-zone-transfer queries.
N
A later RFC says that UDP is the preferred protocol, although TCP is used for zone transfers. So after this point in time it sort of became folklore that most DNS occurs over UDP and TCP is really only used for zone transfers, and we find cases where it's blocked in firewalls where you would not expect zone transfers to occur, things like that. Then, in the late 1990s, DNS started to change.
N
So the reality is, in DNS, if you have to deal with large responses, sort of bigger than MTU size, your choice is either to fragment or to truncate, and we see problems in both cases. Fragments are sometimes blocked by firewalls, and if you truncate, then it leads to TCP, and DNS over TCP is also sometimes blocked by firewalls, and of course it adds latency, and so on. So the situation we're in now is that DNS clients have a lot of complex retry logic to try to work through all these problems.
N
They'll do things like retrying their queries with different EDNS buffer sizes, to try to find the point at which their queries are able to succeed. Next, please. So RFC 7766 was written to update the older RFC 5966. This is an RFC that's all about implementation requirements for DNS over TCP; it does not make any recommendations to operators, but it talks about the things listed here. It talks about how all implementations must support UDP and TCP.
N
It says that resolvers or clients may elect to use TCP first, without trying UDP. It recommends keeping idle connections open for the order of seconds, or, as the old RFC said, you know, two minutes. It says that servers can impose limits on the number of TCP connections, and it advises clients to be prepared to handle out-of-order responses.
N
Next, please. So this draft that we're talking about today is sort of a companion to that: whereas the other one is about implementation requirements, this one is about operational requirements, and the idea is to encourage operators to be aware of these issues and ensure that DNS over TCP is handled equivalently to DNS over UDP. A lot of these bullets here are just cut and pasted from the draft, and it says: authoritative servers must service TCP queries for these reasons, and recursive servers must service TCP queries for similar reasons.
N
It talks about how nameservers may need to limit resources devoted to TCP, but must not refuse to service a query just because it would have succeeded over some other transport protocol. It has a lot to say about how filtering of DNS over TCP is harmful in the general case, while also recognizing that there may be local policy cases where you would need to do some of that kind of filtering, and it says that network operators must allow DNS service over both UDP and TCP.
N
So, getting into a little bit more detail, there are different sections in the draft that talk about, for example, connection admission first. It mentions that SYN cookies are effective in mitigating SYN flood attacks, and that services not intended for use by the public should be protected with access controls. Specifically, it mentions the accept filter in the FreeBSD operating system.
N
Applications must not be configured to refuse TCP queries that were not preceded by UDP, getting back to that earlier point about it being allowed to use TCP first. And there are a couple of recommendations about fast open: servers should enable TCP Fast Open when possible, clients should enable fast open when possible, and so on.
N
On connection management there's quite a bit to say. It says servers must actively manage their connections to avoid resource exhaustion. Server software can provide limits on the total number of established connections, and as an operator it's sort of your job to make sure those limits are appropriate.
N
N
And it has a little bit to say about terminating TCP connections. Importantly, it says that it's preferable for clients to initiate the close of a TCP connection, but on systems where a lot of connections are observed in the TIME_WAIT state, it may be beneficial to tune the local TCP parameters to keep that manageable. In extreme cases, it may be necessary to use the SO_LINGER option with a value of 0 to prevent those connections from accumulating. And a few last miscellaneous points.
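The SO_LINGER tuning described above can be illustrated with a short Python sketch (this is not from the talk or the draft, just standard BSD-sockets semantics: l_onoff=1 with l_linger=0 makes close() send a RST and skip TIME_WAIT):

```python
import socket
import struct

def close_with_reset(sock: socket.socket) -> None:
    """Abortive close: SO_LINGER with l_onoff=1 and l_linger=0 makes
    close() send a RST instead of a FIN, so the socket never enters
    TIME_WAIT. The draft treats this as an extreme measure, for when
    TIME_WAIT sockets pile up; normal closes are preferable."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))  # l_onoff=1, l_linger=0
    sock.close()

# Usage: abortively close a socket (unconnected here, for illustration).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
close_with_reset(s)
```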
N
The recommendations in this draft apply equally to DNS over TLS, which is RFC 7858, as they do to plain DNS over TCP. This draft does not make any references to DNS over HTTPS, at this time at least. And there are a few points about logging and monitoring that implementers need to be aware of: TCP makes this a little bit harder, so don't ignore things like connection reuse and pipelining and out-of-order responses, and so on. That's basically the rundown of the draft. Happy to take any questions.
D
N
There probably are. I know that, so, I was a co-author on the DNS over TLS documents, and associated with that there was a study sort of theorizing what it would look like if there was a lot more DNS over TLS, and I think as a part of that there may be some background on what the current levels of DNS over TLS are. I can't tell you that offhand.
D
N
There's no recommendation specifically about that; it just says that it's up to the client's choice. It doesn't mention happy eyeballs by name, but I think the expectation certainly is that if you're doing DNS over TCP and you have that connection already set up, then it makes sense to reuse that connection rather than, say, go back to UDP. Yes, thank you.
O
Tommy
poly
Apple,
so
thanks
for
doing
this,
I
think
this
is
a
useful
thing
to
encourage
people
to
do
and
support
more.
So
that's
nice
to
see
just
two
comments.
One
so
I'm
glad
that
you're
saying
that
the
server
should
support
TFO
I
was
a
little
bit
more
concerned
about
saying
that
all
the
DNS
clients
should
do
TFO.
O
I'm
me.
It
may
be
good
to
like
soften
that
to
a
may,
just
because
you
know
there
are
some
issues
with
deployment,
and
this
is
oftentimes
for
clients
at
rare
enough
case
that
they
actually
are
using
TCP
that,
if
we're
doing
TFO
in
those
cases
it
may
not
get
tested,
as
much
of
they
may
not
notice
the
edge
cases
until
they'd
break
in
certain
deployments.
Okay
or.
N
O
O
D
Regarding the TFO problems: I think the trend seems to be that the operating systems are building in fallback mechanisms, so the app should ideally be unaware of it, or be able to use the API freely without having to worry about it. So even if we recommend SHOULD, I think there should be a caveat there saying there are known issues with TFO which are being resolved, so some experimentation should be done before rolling out the change. So.
N
D
D
When I say application, I mean the DNS resolver; it could be anywhere, it could be in the browser or the operating system. Okay. In general I'll push back against that gently, to say that, yeah, it's not because I don't think applications can measure it. I just don't think that the people who are building this always have the expertise to understand the failure modes of TCP Fast Open. They're very subtle, and, you know, we're trying.
D
We are discussing them, trying to figure out how to work around those, and hopefully later they will be part of the default behavior in macOS and Linux and Windows. When that happens, it'll naturally be available. So you might say: where available in the underlying platform, clients should use it, or may use it.
D
N
A
Okay, so thanks a lot for these presentations. Obviously this is not a TCPM working group document, but we thought it would be useful to discuss it here, and I think you also got some useful feedback. Thank you. So, thanks a lot, and with this we would move forward to the next talk, which is also not about TCPM working group documents. Please, Praveen.
D
This continues kind of a series of talks I have been doing about what we are doing to improve TCP in the Windows networking stack, work done with others. So this is a quick recap of all the advancements we have recently made. The good news is that all these improvements are now on about 800 million plus machines.
D
TCP is evolving on the Internet. So IW10 was made the default for all TCP connections. The current loss recovery mechanism is RACK, which I will touch upon later in this talk; so RACK and tail loss probe. CUBIC congestion control is now the default on all these systems for all outgoing connections. TCP Fast Open is enabled for limited websites on the Edge browser, with aggressive fallback. There are still middlebox issues, so the operating system API internally has logic to fall back the connection to regular TCP.
D
If there are any problems experienced with TFO. If you are interested in the details of that algorithm, I have done a previous talk in TCPM which has the algorithm. LEDBAT++ is a new congestion control which I also presented in ICCRG, I think three IETFs back. This is also enabled only for limited scenarios currently: any kind of background transfers, like crash dump uploads or Windows updates. The delayed ACK timeout was reduced to 40 milliseconds; traditionally it used to be 200 milliseconds. This puts Windows on par with what Linux is doing.
D
This, I think, over the long term will help a lot, because currently the tail loss probe algorithm requires you to account for the worst-case delayed ACK timer, which is 200 milliseconds; in time, we hope that we will be able to reduce that to 40. The Windows implementation always followed appropriate byte counting, so, because it does not do pacing, the limit was four MSS worth of congestion window growth per ACK.
C
D
The question was: did you ever talk about why it is that you chose to go to CUBIC?
D
Yes, so I actually talked about that previously, but to recap: the problem was that Compound was very delay-sensitive, and we were finding, especially in virtualized environments, because there is another set of hops that you go through in the host, that there were a lot of latency fluctuations that would throw off Compound and have it react very negatively to increased delay, and we found that CUBIC performs better.
D
We understand that Compound was more well behaved in terms of bufferbloat, because it would back off in response to latency increase, but in terms of raw performance, especially when you're, let's say, copying large files across two regions, CUBIC would consistently win. Of course there is increased latency, but in terms of throughput, CUBIC did better.
D
So we have improved how slow start works in Windows, starting with hybrid slow start (HyStart). Previously, when timestamps were disabled, which is the default configuration of the Windows network stack, the stack was only collecting one RTT sample per round, but for HyStart to be implemented effectively, we made a change to collect more RTT samples per round. This has a little bit of performance overhead, but it was necessary for us to be able to implement HyStart.
D
So we implemented limited slow start, immediately after HyStart exits. What limited slow start is, is a less aggressive form of slow start. It comes from RFC 3742, but that RFC talked about doing limited slow start right from the start: it defined a max_ssthresh parameter, and the recommendation there was a value of 100 MSS.
D
But what we are doing here is that if we do exit slow start too early, we do limited slow start, because then the congestion window ramp remains at least a little bit more aggressive than what you would have in congestion avoidance. We tested this and found it to work the best amongst all the various experiments we did for modifying slow start. We did choose to modify a parameter from that RFC: the K value was computed using 0.5, which means at most MSS/2 growth per ACK.
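The limited slow start growth rule from RFC 3742, with the 0.5 factor described above, can be sketched as follows (an illustrative sketch of the RFC's formula, not the Windows source; function and variable names are mine):

```python
def lss_increase(cwnd: int, mss: int, max_ssthresh: int) -> int:
    """Per-ACK congestion window increase under limited slow start
    (RFC 3742). Below max_ssthresh the window grows by a full MSS per
    ACK, as in classic slow start. Above it, K = cwnd / (0.5 *
    max_ssthresh), and growth is capped at int(MSS / K): at most
    MSS/2 per ACK, matching the 0.5 factor mentioned in the talk."""
    if cwnd <= max_ssthresh:
        return mss                      # classic slow start region
    k = cwnd / (0.5 * max_ssthresh)     # K grows with the window
    return max(int(mss / k), 1)         # capped, but always positive

# With max_ssthresh = 100 MSS, a window of 150 MSS grows by MSS/3 per ACK.
mss = 1460
step = lss_increase(150 * mss, mss, 100 * mss)
```

So the ramp stays faster than congestion avoidance (one MSS per RTT) but far gentler than doubling every round.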
D
D
So this is super interesting. I believe that Linux actually does have this, but somebody else has to verify that. But did you find, just leaving it at regular slow start after you exit, what happened when you did that? So, at the end of HyStart you enter congestion avoidance immediately, right? The problem then is that if we exit too early; like the minimum, there is something like a 16 MSS floor on ssthresh in HyStart.
D
So if the available capacity is higher than that, you could exit very early; you could have ample capacity and, just because of the congestion heuristic, you exited too early. And then we find that limited slow start allows you to catch up. It's not as good as full slow start, but it is still better. Oh, I see, so you're leaving out the inter-packet arrival part? Yeah, the inter-packet arrival heuristic is not implemented; it is only the delay-based one.
Q
C
D
D
D
I presume the loss rate also goes down with LSS? Yes, that's probably natural. Okay, thank you. Just to clarify: all of this is done without pacing, so we have not done any measurements on how this would behave with pacing. Did you do any measurements in the wild, or are these lab measurements? These measurements were done in the lab, and then we collected data about timeouts out of slow start. Got it, so things are better now. Thank you.
D
So, RACK. Currently the implementation is compliant with RACK, which means it follows pretty much all the recommendations in that draft. This is done in conjunction with the traditional packet-based detection, which was, you know, the three-dupack threshold; that portion of the loss detection logic has not been removed.
D
There's a reordering timer that can be used, which we are not currently using, and there's an optimization to be able to sort the sent packets in time order. The reason we don't do this yet is because we don't currently have the ability to detect lost retransmits, and if you want to do that, then the RFC recommends that you keep a list sorted in time order. So these are two optimizations we are looking forward to doing next, but they are currently not implemented.
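The core idea of RACK-style, time-based loss detection discussed here can be sketched in a few lines (an illustrative sketch of the draft's principle, not the Windows implementation; all names are mine):

```python
from dataclasses import dataclass

@dataclass
class SentPacket:
    number: int          # identifier in transmission order
    send_time: float     # when the packet was (re)transmitted
    acked: bool = False
    lost: bool = False

def rack_mark_lost(packets, newest_acked_time, reo_wnd):
    """Time-based loss marking in the spirit of RACK: a packet that is
    still unacknowledged is declared lost once it was sent more than a
    reordering window (reo_wnd, typically a fraction of the RTT) before
    the most recently delivered packet, with no three-dupack threshold."""
    for pkt in packets:
        if not pkt.acked and newest_acked_time - pkt.send_time > reo_wnd:
            pkt.lost = True
    return [p.number for p in packets if p.lost]

# Packets 1-4 sent 10 ms apart; packet 4 is SACKed; reo_wnd = 15 ms.
pkts = [SentPacket(n, 0.010 * n) for n in range(1, 5)]
pkts[3].acked = True
lost = rack_mark_lost(pkts, newest_acked_time=0.040, reo_wnd=0.015)
```

Here packets 1 and 2 are marked lost (sent more than 15 ms before the SACKed packet), while packet 3 is given the benefit of the reordering window.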
D
L
D
D
The packet-based detection automatically happens when you receive SACK information from the network, so it will immediately enter loss recovery if the dupack threshold is exceeded. In the time-based implementation: yes, we currently don't have a timer, but that is also triggered by SACK or ACK information; you just may not have crossed the threshold.
D
A
D
D
It's unfortunate that it was not published, but I believe the changes were made in a private working copy or something. In terms of text, I do think it probably needs a little more work, but in terms of the algorithm, I think it looks good. The things that are not implemented are specifically called out as MAY, so those are just optimizations, but all the SHOULDs are implemented. Okay.
D
N
D
Just an addendum: I believe that the Linux stack has moved away from packet-based detection completely, onto RACK, but I'm not completely sure. We would like to do that in the future; currently we are keeping both. So, sorry, I don't have data to say which one is better, or whether removing the dupack threshold causes any problems, because we haven't done that yet. Thanks.
R
Q
D
Well, next slide, please. So the initial RTO traditionally was always at three seconds, and the SYN retransmissions were capped at two, so the total timeout for TCP would typically end up being 21 seconds. We have now lowered the initial RTO to one second by default, which is the RFC-defined minimum. The total timeout is still kept at 21 seconds, so it ends up being more retransmissions, but this was done for app-compat reasons.
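The 21-second figure falls straight out of exponential backoff: 3 + 6 + 12 = 21 with a 3 s initial RTO and two SYN retransmissions, and a 1 s initial RTO lets more attempts fit inside the same budget. A small arithmetic sketch (illustrative only; the function name is mine):

```python
def total_syn_timeout(initial_rto: float, retransmissions: int) -> float:
    """Total time before a connection attempt gives up: the initial RTO
    plus one exponentially doubled RTO per retransmission."""
    total, rto = 0.0, initial_rto
    for _ in range(retransmissions + 1):  # the initial SYN + each retry
        total += rto
        rto *= 2.0
    return total

# The classic numbers from the talk: 3 s RTO, two SYN retransmissions.
classic = total_syn_timeout(3.0, 2)   # 3 + 6 + 12 = 21 seconds
# With a 1 s initial RTO, three retransmissions take only 1 + 2 + 4 + 8,
# so more retransmissions fit inside the same 21-second total.
lowered = total_syn_timeout(1.0, 3)
```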
D
The other thing I would like to point out, and this change has been done for a while, is that previously some of the options, like window scaling, would get removed when there were SYN retransmissions; currently the only option that we remove if there are SYN retransmissions is TFO.
D
The other thing I would like to say is that there's also an API for applications to set more aggressive timeout values if they so desire, and there is a global configuration as well. Jana Iyengar: so, yeah, this is great. I was actually going to ask you if you have any way to reduce it below one second. Have you done any experiments to see what happens when you do go below one second?
D
D
I can lower it, but I don't have any way of measuring whether that caused retransmissions or not. That's a fair point, yeah. But in terms of the API, we are assuming that the application would do their own testing before making such a change; from the operating system point of view, one second seems like the safe value.
D
There are cases, for example, where if you had previous path RTT measurements, we could actually go lower, because even if you're taking different paths to the same destination, we could have a ballpark value, like a function of 2x or 3x the max RTT seen on the previous connection. But currently we don't implement that optimization.
D
So, just going back to the question: I appreciate that there's definitely value in exposing the knob to an application that might have information, but if I were to do an experiment with this, I wouldn't know how to do it on Windows, because as an application I couldn't actually gather information about the number of retransmissions. Oh, now you can, because there is a TCP info socket option API, so you can actually gather statistics on the connection after the connection is done. Oh, thank you.
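The kind of per-connection statistics API being discussed can be illustrated with the Linux TCP_INFO socket option (a sketch of the general idea; the Windows API mentioned in the talk is a separate mechanism and is not shown here):

```python
import socket
import struct

def tcp_retransmits(sock: socket.socket) -> int:
    """Read TCP statistics for a connected socket via the TCP_INFO
    socket option (Linux). The first bytes of struct tcp_info are
    tcpi_state, tcpi_ca_state and tcpi_retransmits (the current
    retransmit counter); cumulative counters such as tcpi_total_retrans
    live further into the struct, so a full tool would parse the whole
    layout rather than just the leading fields."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    _state, _ca_state, retransmits = struct.unpack_from("BBB", raw, 0)
    return retransmits

# Usage on a freshly connected loopback socket, where no losses occur.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
pending = tcp_retransmits(cli)
cli.close()
srv.close()
```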
K
Michael Abramson again. ECN support: what are you thinking there? I think it's still off by default, right?
D
D
So currently we are trying to experiment with TFO; that's the reason we have not pushed on ECN. It's something we are interested in doing in the future, but currently, because the network doesn't support it, there's not a lot of ECN marking actually happening on the network, so, I mean.
K
Being an operator, I have a hard time getting it turned on, because people are saying nobody's using it. The second one is, I know, I know, but Windows is still kind of a largish player in the area. The second thing: does this relate to the Xbox stack as well? Yeah.
C
K
A
Okay, so thanks a lot for this talk. You heard the comment on the RACK one, so the chairs will talk with the others about what to do, and you will hear from us. But thanks a lot; this is very useful information. Thank you.
D
D
There's a pink box here; am I supposed to be in a pink box? Yeah? Okay, fine. Oh God. And we have a German for a chair too, so that helps. Alright. So this is a continuation of a series of discussions of QUIC loss recovery in TCPM, and the hope here is to get more people in this community engaged in the recovery draft and in the discussion of loss recovery in QUIC.
D
Again, the intent here is to give you a sense for what's going on there; it's less to litigate the constants and various things here in this room. There's going to be significant discussion on the recovery draft, basically a one-hour discussion tomorrow in the QUIC working group, so please show up there and litigate all of these points there, or file an issue, that's even better. I'm going to start off very quickly with a rundown of, basically, what QUIC is, again.
D
This
is,
for
those
of
you,
who've
been
hiding
under
a
rock
for
the
last
two
years.
No,
actually,
this
is
for
those
of
you
who
know
about
quake,
but
don't
know
the
details
of.
What's
going
on
in
quake,
so
don't
feel
too
bad
if,
if
you
find
yourself
being
educated
through
these
slides,
so
quick,
the
quick
packet
format
itself
is
different
than
PCBs.
So
it's
important
to
sort
of
change,
your
more
of
thinking,
a
little
bit
which
might
which
might
lend
credence
to
Michael's
comment
about.
D
Oh,
my
god,
it's
Michaels
I
forgot
that
about
alien
technology,
so
the
first
is
quick.
Has
two
types
of
headers:
a
quick
packet
has
two
types
of
headers,
there's
a
long
header
and
a
short
header.
Next,
the
long
header
looks
like
this.
It
has
many
bits
and
it
has
a
version
in
it.
It
has
connection
ID
is
connection
any
lengths,
I'm
not
going
to
go
to
the
details
here,
but
the
high
order
bit
is
that
this
header
is
used
during
the
handshake
and
handshake
only
it's
used
for
establishing
a
connection
after
the
handshake
next
line.
D
So it's basically a way of only having information in the header that's required for the rest of the communication. I'm not going to talk about what part here is encrypted and what's not, because that's not relevant to our discussion here today. But, moving along, what else is in the packet? Frames. If you've seen SCTP, think chunks, but say frames. Next slide. Frames are of different types; frames come in many different flavors, and each frame has its own set of fields. Next. And here are all the different frame types. Read them all,
D
I will quiz you at the end. It's alright, you don't have to read them all; just know these two. Next slide: the ACK frame and the stream frame. These two are the most relevant for our discussion today. There are many other frames. Again, understand that the way this works, if you're familiar with SCTP, is quite similar to that; but if you're not, it doesn't matter.
D
The idea here is that most of the control information in the protocol is basically sent via these different types of frames; it is not all sent in the packet header. The packet header is merely a way to find the destination for the packet; the rest of it is basically inside control frames. So the stream frame is the thing that actually carries application data, and the ACK frame is the thing that carries acknowledgment information. We'll look at these in a little bit more detail. Next slide.
D
So here's what a stream frame looks like. First off, just for context: a QUIC connection can have multiple streams, and every stream has a stream ID. Loss recovery, congestion control, all of these things are aggregated across the entire bundle of the connection, meaning across all streams. But every stream has a stream ID, and a particular stream frame carries a piece of data within a stream ID, and that piece is identified by the offset within that stream.
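The packet-as-container model being described, where a packet number and the (stream ID, offset) addressing of data are independent, can be sketched like this (an illustrative model, not a wire-format parser; the class names are mine):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StreamFrame:
    """A piece of application data, identified by its stream ID and by
    its byte offset within that stream, not by any packet number."""
    stream_id: int
    offset: int
    data: bytes

@dataclass
class QuicPacket:
    """A QUIC packet is just a container: a packet number plus whatever
    frames the sender chose to bundle into it."""
    packet_number: int
    frames: List[StreamFrame] = field(default_factory=list)

# The example from the talk: packet 56 bundles data for streams 5 and 8,
# with the stream-5 data starting at offset 1123 within that stream.
pkt56 = QuicPacket(56, [
    StreamFrame(5, 1123, b"hello"),
    StreamFrame(8, 0, b"world"),
])
```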
D
You all fail. Ah, some of you pass: it's a short header. There are two types of headers; the first bit in there indicates what type of header this is, that bit being one means it's a short header, and then there are other bits in there which I'm not going to go into. But one thing you'll note there is that there's a packet number.
D
The packet number, again, we'll talk about it in a moment, but the packet number is the thing that we use for the entire packet. Next slide. Within a packet, you may recall, there are different frames; in this example there are two stream frames and one ACK frame. Why are there two stream frames, you may ask? Next slide. Let's say, for example, here's a real example: the packet number is 56. The first stream frame is from stream ID five.
D
It has an offset of 1123, because that's a nice even number, and there's application data on stream ID 5. All right, but this packet can carry more streams than just one. Next slide. As in this example, you can see that this packet carries not just data from stream ID 5; it also carries data from stream ID 8. This is lovely if you want to tie the fates of streams 5 and 8 together, but in general you may not want to do that.
So this is important, a little bit, if you want to understand what I'm going to talk about with loss recovery today, so pay attention to this just a little bit if you've not been paying attention so far. We have the largest acknowledged. The ACK frame has basically a few fields, right? It has the Largest Acknowledged field, and that basically indicates the highest packet number seen so far.
D
Note that while your TCP brain, not your alien brain, your TCP brain, might want to think of this as a cumulative ACK point, it is in fact not. This is not the highest packet number seen so far that has been received in sequence; this is merely the highest packet number seen so far. So this maps to the highest end, at the farthest end of the farthest SACK block, that you have received so far. This is the highest packet number I've seen.
D
D
D
There's food up front, by the way. All right, let's move on to the next slide. That's basically the ACK frame, and now we get to the fun part of this. This is your test. I am NOT going to go into the details here, actually, but this is just showing you how a particular packetization, a particular encoding, of an ACK frame might look, and these details,
D
you can look at the slides later to get a sense for exactly how this happens. But as long as you understand the different fields and things that we are putting in the ACK frame, that's adequate for our discussion today. So, next slide. And there's an example of the same QUIC packet with a particular ACK frame, in this particular example. Okay. So again, remember: the largest acknowledged is the highest packet number I've seen so far, and that's about it. Next slide. Now,
D
Let's
assume
that
the
packet
we
were
just
you
know,
creating
back
at
56
is
in
fact
dropped
in
the
network.
Also,
let's
assume
that
stream
8
was
reset.
Why?
Because
it's
possible
quick
allows
you
to
reset
screams.
Remember
I
told
you
that
streams
are
things
within
a
connection.
You
can
reset
streams.
So
let's
assume
that
reset
stream
eight
was
reset,
meaning
that
the
application
says
I
do
not
care
about
the
data
on
the
stream
anymore,
to
send
it
or
to
receive
it
in
both
directions.
D
It's
done,
kill
it
and
let's
say
that
the
loss
addiction
dead
except
Bacchus
56
was
lost,
okay,
meaning
that
through
whatever
mechanism
and
then
we'll
talk
about
the
mechanisms
in
a
moment,
56
is
marked
as
lost
and
let's
say
to
the
last
packet
that
I
sent
out
was
packet.
Number
74,
okay,
I've
been
sending
packets
out,
56
was
sent
I
sent
out,
71
72,
73
74
and
then
I
detected
56
as
lost.
That's
where
we
are
stream.
D
Stream eight has also been cancelled. Next slide. So, going back to looking at packet 56: there's the packet header. Should we retransmit everything here? Come on, speak up, I know it's post-lunch. "No," say people; nobody says yes. This is a room that knows what I'm about to show. Good. Now, stream ID 5, the first stream frame: should it be retransmitted? "Yes," says somebody in the back, and it's Randall. Should we retransmit stream ID 8?
D
Confusion. No, we don't need to. We may want to send a new ACK frame; we don't retransmit the same one, and you get brownie points, whoever this was, I don't know, yeah, there we go. Okay, you get brownie points, you get a strawberry. And what do we do? So, next slide. We don't retransmit those things, and we send a retransmission in packet number 75. Not 56; 75.
D
It's not TCP, it's alien technology. So the point here is that we're sending a new packet with retransmitted information in it. We may not retransmit everything; we may in fact choose not to retransmit anything at all. If stream 5 was also cancelled, there's nothing to retransmit: you detected loss, but there's nothing to retransmit, right? And I can choose not to retransmit stream 5 right now either; we'll get to that in a second. But I hope the general gist is clear. Next slide. So, loss recovery: it looks like this.
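The re-bundling described above can be sketched roughly like this: the still-useful frames of a lost packet are carried in a fresh packet with a new packet number, while ACK frames and frames for reset streams are dropped. All names here are illustrative, not taken from any real implementation.

```python
# Sketch: retransmit the surviving frames of lost packet 56 inside a
# brand-new packet (number 75), never by resending packet 56 itself.
def build_retransmission(lost_packet, reset_streams, next_packet_number):
    """Return a new packet carrying the still-useful frames of a lost one."""
    frames = [
        f for f in lost_packet["frames"]
        # ACK frames are never retransmitted; a fresh ACK frame is sent instead.
        if f["type"] != "ACK"
        # Frames for reset/cancelled streams have nothing left to deliver.
        and not (f["type"] == "STREAM" and f["stream_id"] in reset_streams)
    ]
    return {"packet_number": next_packet_number, "frames": frames}

lost = {"packet_number": 56,
        "frames": [{"type": "STREAM", "stream_id": 5, "data": b"hello"},
                   {"type": "STREAM", "stream_id": 8, "data": b"bye"},
                   {"type": "ACK"}]}

# Stream 8 was reset, so only the stream-5 frame survives, in packet 75.
retx = build_retransmission(lost, reset_streams={8}, next_packet_number=75)
```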
D
The packet numbers in QUIC basically represent transmission order. This is basically the premise of QUIC's loss recovery, right? Packet numbers represent transmission order. They do not, however, represent the order in which the receiver expects to receive them. This is a critical difference.
D
This has nothing to do with the packet number. Packet numbers in QUIC are simply monotonically increasing numbers. They do not occur again, and this is in fact fundamental to the security of the protocol itself, but we won't go there. The point here is that packet number 56, once it's dropped, will not reappear within that connection. Okay, that's important. And finally, there's a caveat about multiple packet number spaces during connection setup; I'm not going to go there. So packets are basically containers: they carry various frames, and packets are simply containers.
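The "never reused" property can be shown with a toy sender: every transmission, including a retransmission of the same stream data, consumes a fresh monotonically increasing packet number, unlike TCP, which reuses the sequence number of the lost segment. This is a hypothetical illustration, not real protocol code.

```python
# Toy sender: packet numbers only ever move forward, even for retransmissions.
class ToySender:
    def __init__(self):
        self.next_pn = 0

    def send(self, frames):
        pn = self.next_pn       # every transmission gets a fresh number
        self.next_pn += 1
        return pn

s = ToySender()
first = s.send(["STREAM 5 data"])   # original transmission
retx = s.send(["STREAM 5 data"])    # retransmission of the same data
```

The same payload goes out twice, but under two different packet numbers, so the receiver's ACKs are never ambiguous about which transmission they cover.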
D
It's a really nice thing to be able to divorce that from the way that stream data is delivered, because for loss recovery I really only care about what's in the packet: the unit of drop is a packet, and the unit of ordering is a packet. From my point of view, that's transmission ordering, and that's useful for me as a sender. Bear in mind that retransmissions are not automatically high priority.
D
As I said, stream 5 need not be retransmitted right now, even though I've detected it as lost. My stream priorities, which are application dependent, might dictate that stream 5 is the lowest-priority thing right now: don't use any connection bandwidth for sending stream 5 data as long as you have stream 10 data to send. So it's possible that I don't retransmit right now. So we also have to divorce what happens to the congestion window and to loss detection from what is in fact transmitted when loss is detected.
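The priority-over-retransmission point can be sketched as a tiny scheduler: lost stream-5 data only goes out once higher-priority stream-10 data is exhausted. The data structures and priority scheme here are assumptions for illustration; real priority schemes are application defined.

```python
# Sketch: pick the next frame by stream priority, regardless of whether the
# data is new or a retransmission of something detected as lost.
def next_frame_to_send(pending):
    """pending: {stream_id: {"priority": int, "data": [chunks]}}; higher wins."""
    candidates = [(sid, s) for sid, s in pending.items() if s["data"]]
    if not candidates:
        return None
    sid, s = max(candidates, key=lambda kv: kv[1]["priority"])
    return (sid, s["data"].pop(0))

pending = {5:  {"priority": 1, "data": ["lost-chunk"]},       # detected as lost
           10: {"priority": 7, "data": ["new-a", "new-b"]}}   # fresh data

order = []
while (f := next_frame_to_send(pending)) is not None:
    order.append(f[0])
# Stream 10's data drains first; the stream-5 retransmission goes last.
```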
D
There's a delayed-ACK timer that is currently recommended at 25 milliseconds, meaning that you ACK every other packet that's received. There is a bit of an ordering thing here where, if the received packet is not the largest received plus one, then you ACK every packet. That's what's recommended, again based on just standard TCP behavior.
D
So there is an expectation of ordering here to some extent, but a receiver is also free to process more packets before sending an ACK back; there's no strict requirement that it must send an ACK for every two packets that are received. This is to allow receivers, and in fact senders as well, to experiment with this. In fact, there's an issue open on trying to figure out whether we can make this something that's negotiated between the sender and the receiver and made more dynamic.
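The receiver-side ACK policy just described can be sketched as follows: ACK every second packet (the roughly 25 ms delayed-ACK timer is elided here), but ACK immediately when a packet arrives out of order, i.e. when it is not largest-received plus one. This is a hypothetical helper illustrating the recommendation, not code from any implementation.

```python
# Sketch of the recommended QUIC receiver ACK policy: every 2nd packet,
# or immediately on any reordering gap.
class AckPolicy:
    def __init__(self):
        self.largest = -1
        self.unacked = 0

    def on_packet(self, pn):
        """Return True if an ACK frame should be sent right away."""
        out_of_order = pn != self.largest + 1
        self.largest = max(self.largest, pn)
        self.unacked += 1
        if out_of_order or self.unacked >= 2:
            self.unacked = 0
            return True          # ACK now
        return False             # wait for a 2nd packet or the delayed-ACK timer

p = AckPolicy()
# Packets 0, 1, 2 arrive in order; packet 3 is missing when 4 arrives.
decisions = [p.on_packet(pn) for pn in [0, 1, 2, 4]]
```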
D
Thanks, yeah. So now we get into the meat of this conversation, fifteen minutes into this presentation, because that's when it all happens, right? QUIC loss detection. I'm going to walk through sort of what current loss detection is. Now, this has come through a number of changes to the draft, and if you show up tomorrow to the meeting, you'll hear a talk about what's changed in the past four months, and there's a fair amount of change that's happened over the past four months.
D
What I'm going to describe here is the current state of the draft, okay? This is draft 19. So the way loss detection works, at a high level, is that loss detection only happens when an ACK frame is received. Again, that's a premise. This is an important premise, and you'll see why in a moment. Meaning that when we receive an ACK frame that newly acknowledges a packet, then we use two different kinds of thresholds, packet thresholds and time thresholds, to detect previously sent packets as lost.
D
The packet threshold means that, if I see reordering of greater than or equal to three packets, this is the same as TCP's three-dup-ACK thing: basically, if I see a packet acknowledged that is three or more packets ahead of where the missing packet is, then we mark that packet as lost. Or there's the time threshold.
D
This is similar to RACK again: if I receive at least one packet ahead of the packet I'm considering, but enough time has passed since I sent that packet, and "enough time" here is this particular value, then I mark that packet as lost. Okay, so there's packet-threshold-based and time-threshold-based loss detection, both of them operating at the same time.
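The two conditions just described can be sketched roughly as in draft-ietf-quic-recovery-19: on a newly acknowledged packet, an earlier unacknowledged packet is declared lost if it is at least three packet numbers behind (packet threshold), or if it was sent longer ago than a time threshold of 9/8 times max(smoothed RTT, latest RTT). The constants follow the draft of that era; the surrounding structure is a simplification for illustration.

```python
# Sketch of draft-19-style loss detection, run when an ACK arrives.
K_PACKET_THRESHOLD = 3      # like TCP's three dup-ACKs
K_TIME_FACTOR = 9 / 8       # RACK-like time threshold multiplier

def detect_lost(sent, largest_acked, now, smoothed_rtt, latest_rtt):
    """sent: {packet_number: send_time} for still-unacknowledged packets."""
    delay = K_TIME_FACTOR * max(smoothed_rtt, latest_rtt)
    lost = []
    for pn, send_time in sent.items():
        if pn >= largest_acked:
            continue                       # not behind the newest ACK
        if largest_acked - pn >= K_PACKET_THRESHOLD or now - send_time >= delay:
            lost.append(pn)
    return sorted(lost)

# Packet 56 was sent long ago; 71-73 were sent just before 74 was ACKed.
sent = {56: 0.00, 71: 0.97, 72: 0.98, 73: 0.99}
lost = detect_lost(sent, largest_acked=74, now=1.0,
                   smoothed_rtt=0.05, latest_rtt=0.04)
# 56 and 71 trip the packet threshold; 72 and 73 are within both thresholds.
```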
D
So that's how loss detection works, and when I say this is how loss detection works, you'll see in a moment that this is broader than how it's done in TCP. Next slide: when no acknowledgments are received, however. That works well and fine as long as you're receiving acknowledgments: when an ACK is received, I can do loss detection. But what if no ACK is received? What we do is set a timer called a probe timeout, a probe timer, and when the probe timeout happens, you simply send packets out; you send one or two packets.
D
That's what a sender does when the probe timer goes off, and the probe timer is set like so. This must remind people of the RTO computation; that's exactly what this is, with one important difference: there's no min RTO. Please don't come to the mic and yell at me yet. There's no min RTO, but there is a max_ack_delay. The max_ack_delay is in fact sent to me by the peer, as something where the peer, the receiver, says:
D
this is the maximum amount of time by which I will delay an acknowledgment. And so the PTO includes that, but it removes the min RTO. This is very much in line with the rationale for why min RTO was even there in the first place, but that's the proposal right now. Yes?
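The probe timeout just described can be sketched as the classic RTO formula (SRTT plus four times RTTVAR) with no minimum clamp, plus the peer-advertised max_ack_delay, backed off exponentially on consecutive unanswered probes. This mirrors the draft-19 computation in spirit; treat the exact form as illustrative.

```python
# Sketch of the QUIC probe timeout (PTO): RTO-like, but no min-RTO clamp,
# and the peer's advertised max_ack_delay is added in.
def probe_timeout(smoothed_rtt, rttvar, max_ack_delay, pto_count=0):
    pto = smoothed_rtt + 4 * rttvar + max_ack_delay   # no minimum applied
    return pto * (2 ** pto_count)                     # exponential backoff

base = probe_timeout(0.100, 0.010, 0.025)                 # first PTO
backed_off = probe_timeout(0.100, 0.010, 0.025, pto_count=2)  # after 2 fires
```

With a 100 ms SRTT, 10 ms RTTVAR, and a 25 ms max_ack_delay, the first PTO is 165 ms, and each unanswered probe doubles it.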
J
D
Thank you, that was nice. The probe packet can contain pretty much anything; we don't actually say. If you have old data, you can send old data; if you don't have that, you can send new data. But we suggest that you send new data before going off to sending old data, because we don't quite know what's happened yet.
D
The point here is that the PTO basically comes from, it's an extension of, TLP. So think of it as a tail loss probe that just keeps happening again and again and again; that's one way of thinking about this. So the premise here is that a timer firing does not mean packet loss. Again, as I said, packet loss is detected when an ACK is received; until an ACK is received, we don't declare anything as lost. There's an exception, but it's an exception, not the common case. We do not declare anything as lost.
D
Not until we see an ACK. The reason this is useful is that it allows us not to have to deal with spuriously marking things as lost; we mark things as lost when we actually do see an ACK. So the idea here is: when a timeout happens, we don't do anything to the congestion window; we don't do anything to anything at all. We just send out probe packets, and when an ACK comes back, we do the actual detection.
D
Three minutes left. Persistent congestion. So when do we actually collapse the congestion window? Because there is an RTO thing in TCP, right? At some point you need to infer that there's persistent congestion in the network, and in TCP you don't wait for an ACK to collapse your rate. So in this case, the way we do that is: when there have basically been no ACKs coming back, and all packets are basically lost over a long enough period of time...
D
When we do not receive any acknowledgments over a long enough period of time, then we collapse the congestion window to the minimum. The "long enough period of time" is effectively defined directly in terms of time: the amount of time it could have taken for an RTO to fire, right? So the draft right now has this expression here, and the idea, with the default value of two, is that this is what effectively would have happened.
D
If you had had two tail loss probes followed by one RTO, that's the amount of time that's computed in this expression. And then we say: if there have been no ACKs received, but data has been going out for this period of time, then you mark the connection as being in persistent congestion, and we collapse the congestion window. Next slide. I'm not going to go into the details of this, but that's the loss detection part. Next slide. There's a bunch of tooling. Next slide.
D
There's active work happening on tooling, and there's a whole hour on this in the TSV area meeting, to discuss tooling that you can actually use to look at loss recovery behavior and so on in QUIC connections, with QUIC implementations. So please come to that and watch this. And that's the end of my talk.
C
D
I think that's the recommendation in the current recovery draft, yeah. In the RACK draft, I'm saying, we could have a similar strategy; so I think we have something that aligns in the recovery draft as well. Basically the same logic, being that you send new data if you can, otherwise you send old data. And on the previous slide, where the congestion window is collapsed...
Yep, persistent congestion.
A
Okay.
D
Yep, the next, next, next... yeah, that one. So this doesn't just collapse the window. Someone just noted that we should probably say that we mark everything that was outstanding as lost, right? Everything that was outstanding is marked lost at this point. I can't remember whether that's the case. Is that actually done in the draft? Oh.
D
P
If you have differences, then we justify them at least, right? And you went into setting the congestion window a little bit at the end of the talk, so that's a little different than the loss detection part, and I think setting the congestion window would come into play through whatever congestion control is in control at that point in time. So right now I think the default is New Reno for the time being. Is there any, I mean...
D
This is why I call this persistent congestion: so you would basically call into BBR, for example, and say, you know, the equivalent of a timeout happened. But the interface is quite pluggable, and there is an API in the draft if you want to use that implementation. The mechanisms are quite divorced from each other, even as described in the draft, so it allows you to use other congestion controllers. Thanks. So.
A
We are running out of time, so thanks a lot for this talk. As mentioned, the intention was to raise awareness and to get feedback from this community, and I'm sure that you will understand where, in other working groups, you can contribute if you have any further comments. And we are unfortunately out of time, which means that we don't have enough time left for the last talk that we had on the agenda, time permitting.