From YouTube: IETF102-TSVWG-20180719-1810
Description
TSVWG meeting session at IETF102
2018/07/19 1810
https://datatracker.ietf.org/meeting/102/proceedings/
C: If you change the port number, you have to recompute checksums, and that's also something which is pretty heavy for SCTP. So this document specifies a method of doing NAT without changing the port numbers: it uses a protocol-specific field, the verification tag, as a sort of connection identifier, and this is how you handle collisions when you have an incoming packet and need to figure out which node of the private network to send the packet to. If you want to see the details, you can read the document; it's all specified there.
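The demultiplexing idea just described can be sketched in a few lines. This is a minimal illustration of the concept only, with invented names and a toy table; it is not the procedure specified in the NAT support draft:

```python
# Sketch: deliver inbound packets to internal hosts without rewriting
# ports, using SCTP's verification tag (vtag) as a connection identifier.
# All names and the table layout are illustrative, not from the draft.

class SctpNat:
    def __init__(self):
        # (remote_addr, remote_port, local_port, vtag) -> internal host
        self.table = {}

    def register_outbound(self, internal_host, remote_addr, remote_port,
                          local_port, vtag):
        key = (remote_addr, remote_port, local_port, vtag)
        if key in self.table and self.table[key] != internal_host:
            # Collision: two internal hosts chose the same vtag and ports
            # toward the same server; the draft handles this case explicitly.
            raise KeyError("verification-tag collision")
        self.table[key] = internal_host

    def route_inbound(self, src_addr, src_port, dst_port, vtag):
        # Inbound packets carry the tag the internal host announced, so
        # the NAT can pick the right private node without port translation.
        return self.table.get((src_addr, src_port, dst_port, vtag))

nat = SctpNat()
# Two inside hosts may even share the same local port toward one server;
# the verification tag still disambiguates them.
nat.register_outbound("10.0.0.5", "203.0.113.7", 80, 5000, vtag=0xCAFE)
nat.register_outbound("10.0.0.6", "203.0.113.7", 80, 5000, vtag=0xBEEF)
assert nat.route_inbound("203.0.113.7", 80, 5000, 0xCAFE) == "10.0.0.5"
```

The point of the sketch is that the lookup key never requires modifying the packet, so the expensive checksum recomputation is avoided.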
C: We have finally addressed the comments from Karen that we got some time ago. I went through the document, clarified a lot of things, looked at consistency, figured out that we were not using the IP addresses one should use for examples in drafts, and changed that. There was one comment raised some time ago that this document should be split into, let's say, two parts: one part to be read by implementers of NAT boxes, and one part to be read by implementers of endpoints. I didn't do it, because...
C
C
The
the
the
authors
consider
this
document
finish,
except
for
gory
reddit
I,
think
two
days
ago
and
provided
some
editorial
comments.
I
would
say
so.
He
made
a
pretty
good
suggestion
for
increasing
readability,
increasing
consistency
with
documents
from
behave,
and
he
also
suggested
to
not
really
split
the
documents
into
two
parts
but
split
specific
sections
into
two
subsections,
so
that
you
have
subsections
which
are
concentrate
on
the
end
points
and
the
other
subjects,
subjects
and
dust
the
counterpart
for
the
middle
box.
G: I think the document's purpose is to describe: here, NAT implementers, here's how to support SCTP; and here, SCTP folks, here are some considerations for you as well. It's not to say, oh, and by the way, if you don't have this NAT support, all these other things will happen. That sounds more like something that would come out of BEHAVE, or some other document; it would be another document, I would think.
H: And I think there may be NATs... I mean, there are NATs that will just map the IP address and might not understand the protocol, but pass it as-is: you see this protocol because one of your inside hosts initiated something on this protocol that you know nothing about, so maybe you should just forward all the packets.
C: The point is what the endpoint does, basically, at a high level. If it supports this NAT thing, it avoids putting IP addresses in the SCTP part of the IP payload, and that's a good thing also for NATs not supporting this. So it can work, to the extent of what the NAT implementation has done: if it just passes the SCTP packet through, that's fine. How it behaves in the case of two nodes in the private network talking to the same server, I have no idea; I can't say.
B: Fair enough, right. And why are we having such a complicated conversation? The answer is because we need people who understand NATs to review this document. So have we got any people lined up as reviewers for this? Would anyone like to review it? Would anyone be kind of coerced into reviewing it, if we bribe you in some way? I mean, the document will be clearer. That might make sense. Yes, thank you.
B: So we have reviewers, and we will try and resurrect parts of the BEHAVE expertise from the IETF to go with that. When you submit the next one, we will then give it a working group chair review, and if that looks right, we will intend to go to working group last call and solicit reviews. Perfect. I suspect that's a tricky job, but we will start it. Great.
C: The changes include now only giving references to the errata filed in the IETF errata system. So I went through all the errata which were submitted and considered which ones were correct; they have corresponding text in this document, but not the other way around. Not every issue we have here is an erratum that was submitted to the errata system.
C
The
references
are
there.
There
are
some.
There
are
some
Radha's
on
the
website
which
are
not
covered
by
this.
These
are
the
e
Radha's
which
are
wrong
or
have
been
rejected
or
whatever
titles
were
fixed.
Some
text
verifications
came
up.
Someone
figure
out
that
we
missed
something
in
the
ICMP
text,
and
so
things
have
been
cleaned
up.
C
The
status
of
the
isg
evaluation
is
one.
Yes,
that's
normally
the
isg
member
who
is
responsible
for
this
three
times.
No
objection,
no
records
and
abstain
ate
once
has
a
disgust.
That
is
a
guy
who
that's
Eric,
Ries
Cola,
who
has
had
initially
also
an
abstain,
but
changed
it
to
a
disgust
because
of
the
high
number
of
abstain.
C
You
have
sections
each
section,
describe
an
issue
and
says
which
text
in
the
ROC
gets
replaced
by
a
different
text,
and
this
is
sorted
by
the
time
we
found
the
issues.
So
whenever
a
new
issue
came
up,
a
new
section
was
added
to
the
end
and
the
issue
was
resolved.
So
one
of
the
Commons
was
what
happens.
Is
there
is
text
which
is
touched
multiple
times,
so
the
suggestion
was
to
add
for
each
for
each
section
sentence
saying
the
text
changes
here
are
final.
C
B
E
B
B
C
Plan
was
until
this
morning
to
submit
our
a
0-0
version
of
the
base
document
being
identical
to
the
RFC
version,
so
that
we
have
all
the
changes
in
the
tracker
we
figure
out
that
the
RFC
was
processed
in
an
Roth,
so
we
submitted
XML.
The
ROC
editor
changed
to
an
Roth
and
that's
the
reason
why
some
of
the
tables
went
wrong.
J: Hopefully it would fade away more quickly, but it won't. OK, I won't go into any more detail on that, other than to say: there's a network part, a host part, and a protocol in between, based on the ECN field, for the two to know which packets are which. Right. This is an outline of the whole talk, the drafts I'm going to talk about today; I'm not going to talk about the architecture, which is stable and hasn't changed.
J: There's a number of changes in each of these drafts; I'm picking out one in each that's the major technical change. In that one we're getting towards the corner cases: this is handling unexpected packets in the queue, which I will talk about briefly. And then there's a draft that's not a working group draft currently. You had it on the chairs' introduction as new, but actually it was new last time; I presented it last time.
J
Some
are
you
protecting
presenting
updates
this
time,
but
I
will
try
and
give
an
outline
of
it
as
well
for
those
that
missed
it
last
time
and
then
longer
give
a
couple
of
slides
on
just
a
general
status
of
the
l4s,
a
sort
of
state
of
the
nation
on
on
LS
and
end
up
with
next
steps
so
identifying
the
modified,
EC
and
semantics
using
the
ECT
one
code
point
this:
the
main
change
is
within
this
draft.
We
have
what
a
court
would
have
some
parochially
called
the
tcp
prog
requirements.
J
I,
don't
think
we
yeah.
We
do
mention
that
term
in
that
there,
but
it's
not
just
for
TCP
s
for
all
transports
and
they're,
essentially
requirements
that
the
congestion
control
must
satisfy
in
order
to
have
the
right
to
use
this
code
point
to
send
into
the
network.
Essentially
what
it
means
is
that
this
code
point
then
says
this
congestion.
Control
of
this
transport
is
acting
in
a
scalable
way
and
it
won't
cause
a
queue
and
so
on.
J
So
it's
entitled
to
go
into
that
very
low
latency
queue
and
we've
added
a
fifth
requirement,
which
seems
like
it's
nothing
to
do
with
any
of
the
others.
So
I've
got
to
explain
that
and
that's
what
the
next
few
slides
do.
I
won't
go
into
the
other
four
requirements,
not
enough
time,
but
the
fifth
requirement.
Paraphrasing
this
isn't
the
wording
in
the
in
the
text.
It
says
a
scalable
congestion
control
must
detect
loss
by
counting
in
units
of
time,
not
in
units
of
packets.
J
Now
this
is
like
the
way
TCP
rack
works
and
it's
not
like
the
way
TTP's
old,
3do
Packer,
all
worked
seems
like
TTP
Iraqis
take
being
picked
up
very
fast.
If
you
don't
know
what
teach
me
RAC
is
it's
about
detecting
loss
in
units
of
time,
not
packets
and
I'll-
come
on
to
that
on
the
next
slide.
But
the
reason
why
this
is
important
for
l4s
is
that
it
then
means
that
link
technologies
that
support
help
for
us
can
know
that
all
senders
sending
packets
into
this
queue
into
their
queue
in
the
network.
J
They
can
look
at
this
ECT
one
Co
point
and
they
know.
That
means
you
support
detecting
loss
by
time,
and
it
means
you
can
relax
your
resequencing
and
reordering
your
requirements
in
the
network,
and
that
means
you
can
remove
head-of-line
blocking
if
you're
doing
things
like
link
binary
transmissions.
So
I'm
going
to
explain
all
this,
but
that's
that's
the
overview,
so
quick
background
on
RAC
loss
normally
is
only
at
the
transport
at
the
intern
layer.
You
can't
actually
know
there
was
a
loss
in
the
network.
J
You
can
only
deem
that
you
think
there
might
have
been
a
loss,
because
packets
are
arriving
and
there's
been
a
hole
because
the
packet
might
arrive
later
now,
classic
TCP
uses
a
three
new
Peck
or
all
TTP
rack
uses
a
fraction
of
the
round-trip
time
term,
the
reordering
window,
so
arms,
a
timer
when
the
next
packet
arrives
after
the
gap
and
then,
if,
after
that,
time
back
in
hasn't
appeared
it
it
deems
it
lost.
I've
called
that
epsilon,
although
it's
not
called
that
in
there
they're
draft,
because
you
can't
write
that
in
S
key.
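The reordering-window behaviour just described can be sketched as follows. This is a deliberately simplified illustration, not the algorithm in the RACK draft: real RACK times the window from the delivery event of the most recently delivered packet, while this sketch approximates with send times to stay short.

```python
def rack_deem_lost(send_times, acked, now, srtt, epsilon=0.25):
    """Deem packets lost in units of time, RACK-style (simplified).

    send_times: {seq: time the packet was sent}; acked: set of ACKed seqs.
    A hole is deemed lost only if a packet sent after it has been ACKed
    AND the reordering window (epsilon * srtt) has also elapsed, rather
    than after a fixed count of duplicate ACKs.
    """
    reo_wnd = epsilon * srtt
    latest_acked_send = max((send_times[s] for s in acked), default=None)
    lost = []
    for seq, t_sent in send_times.items():
        if seq in acked or latest_acked_send is None:
            continue
        # Sent before an already-delivered packet, and the extra time
        # allowance for reordering has passed: deem it lost.
        if t_sent < latest_acked_send and now >= latest_acked_send + reo_wnd:
            lost.append(seq)
    return sorted(lost)

send_times = {1: 0.00, 2: 0.01, 3: 0.02, 4: 0.03}
# Packets 2-4 ACKed, packet 1 missing; srtt = 100 ms, window = 25 ms.
assert rack_deem_lost(send_times, {2, 3, 4}, now=0.05, srtt=0.1) == []
assert rack_deem_lost(send_times, {2, 3, 4}, now=0.20, srtt=0.1) == [1]
```

The two assertions show the key property: the hole is not declared lost the moment later packets arrive, only once the time allowance has expired.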
J: That's before the ACKs of those packets arrive: you've done a retransmission, and then you discover you shouldn't have done it. However, of course, it will take longer to repair genuine losses, where "longer" is (1 + epsilon) times the round-trip time, because it always takes at least a round-trip time to repair a loss: you've got to discover it, and then you've got to wait this extra epsilon. So you don't want epsilon to be too long, otherwise your losses take too long to repair, but you don't want it to be too short either.
J: All the details are in the RACK draft, but that's RACK in a nutshell. So what are the benefits of RACK to links? That's why we're saying this is a must for L4S: because there turn out to be benefits beyond those for the applications and the end systems. As well as the benefits at layer 4, which are already articulated in the RACK draft, it gives you performance improvements at the link layer, and I'm going to try and explain it like this.
J: Imagine that I'm showing two flows here, one going slowly and one going fast. There are 12 packets per round-trip time in the one with the long packets, so there are 12 packets in that round trip; and then, either on some faster link, or at some future date when things have got faster and you're sending more packets, there are 96 packets in the same round-trip time, just as an example: 8 times faster.
J: Now, if you have to keep packets in order on that link to within three packets (you mustn't reorder by more than three packets, because otherwise TCP will consider there has been a loss and fire a retransmission), then when the packets are going slowly you've got 6 milliseconds: you've got 3/12 of that round-trip time. When the packets are going fast...
J: ...you know, eight times faster, you've got one-eighth of the time. And as things get faster and faster, you've got to keep ordering within microseconds and then nanoseconds; in this case, 750 microseconds. Now you might think: well, why does that matter? Because if they come in in order, they'll go out in order, and the faster they go, they'll still be in order. But it's because most links, in order to get faster, parallelize.
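The arithmetic in this passage can be checked directly. Note that the 24 ms round-trip time is inferred here from the 6 ms and 750 microsecond figures; it is not stated explicitly in the talk.

```python
# A fixed 3-packet reordering tolerance shrinks in *time* as the rate
# rises, for the same round-trip time. Worked numbers from the talk.

def reorder_budget_s(rtt_s, pkts_per_rtt, tolerance_pkts=3):
    # Time budget = (tolerance / packets-per-RTT) * RTT.
    return tolerance_pkts / pkts_per_rtt * rtt_s

RTT = 0.024  # 24 ms round trip, inferred from the quoted figures
assert round(reorder_budget_s(RTT, 12) * 1e3, 3) == 6.0    # 6 ms budget
assert round(reorder_budget_s(RTT, 96) * 1e6, 1) == 750.0  # 750 us budget
```

So an 8x rate increase cuts the in-order delivery budget by exactly 8x, which is the scaling problem the speaker is pointing at.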
J: They use multiple channels, particularly over radio links, and then put everything back together again afterwards. And so the tighter you make the timing of the resequencing after parallelization, the tighter the clocks have to be, and everything becomes much more difficult to make fast. So it would be really useful if this scaled in such a way that, as everything gets faster, you've still got the same amount of time, rather than the time getting squeezed and squeezed as things get faster.
J: You quite often have link-layer retransmission going on to heal the radio losses, and then you have a resequencing buffer at the receiver end of the link. So if there's a link-layer retransmission going on, you've set some link-layer sequence numbers and you've got a hole in that sequence: you hold the packets behind that hole until you do the repair, and then you let the whole buffer go out once you've repaired it.
J
It
notice
that
delays
everything
and,
in
particular
it
delays
all
the
packets
you're
holding
which
may
not
even
be
for
the
same
flow,
because
this
is
a
link
level
or
it
may
not
be
for
the
same
stream.
You
know
if
it's
quick
or
something
like
that
and
it's
very
difficult
to
know
whether
they
are
or
not
so
you're.
You've
got
your
head
of
line
blocking
and.
J: So, with making this rule that you have a certain amount of time in which reordering is allowed, it means that the resequencing buffer can actually just be removed completely: you just forward stuff as you get it. If there are any holes, you do the retransmission afterwards, lazily; that retransmission comes out, and once you've eventually got the packet through, you send it off out of order, but you've got plenty of time to do it. I mean, not plenty of time...
J
You
know
now
we're
talking
of
sub-millisecond,
maybe
or
millisecond
or
so,
but
it's
at
least
it's
not
getting
smaller
over
time.
It
scales
in
other
it
weight.
It
scales
all
to
one
right
and
finally,
Gori
one
more
on
this.
So
why
are
we
making
this
a
must
in
in
to
use
ECT
one
a
scalable
congestion?
Oh,
must
not
to
take
loss
in
units
of
packets.
B: I said I don't believe the 30 years, right. And the second thing, just as a working group chair comment, on your question of why this has to be a MUST NOT: the working group as a whole has to buy in to this as being the way forward, because it has implications. It means RTP-based protocols as currently written will not work, or maybe not work well.
J: It says it must not detect loss in units of packets, which means you've got to use time right from the start, whereas RACK uses the three-DupACK rule from the start. But I had a session with Yuchung before he went back, and he's now understood that he's got to try and put that in units of time. I mean, it's not as important for TCP as it is for this case, but if he doesn't do it for TCP, then it will be 30 years.
J: I'll get straight to... I'll skip this and say simply that we've updated the diffserv draft. The main purpose of this draft was to introduce the idea of using Expedited Forwarding in the dual queue, as well as the ECN, for what you might call non-queue-building traffic that is not responsive: things like DNS, or game sync packets, stuff like that.
J: So that's the main reason for this draft, and I'm thinking, from feedback I've heard from David, that maybe all the other stuff in this draft could either die or be separated out or something, because it seems nobody was particularly interested in all the other stuff: I thought it was what David wanted, and then he said he's not interested in it. So.
J: Essentially, very similar results: the ones on the left were for a DSL link, the ones on the right are on an Ethernet link, but essentially it shows that the authors' implementations, or all the experiments, must be invalid, because they both give the same result, which is good. And there's a number of draft updates; we've heard about most of these, we've heard about QUIC ECN, we've heard about everything. I wanted to get, please, just to the last slide, which is next steps.
J
That's
what
I
want
to
get
to
the
next
steps?
Yeah.
Can
you
do
the
last
slide
Yeah
right,
so
it's
I
can't
say
unfortunately,
there's
a
group
of
people
working
on
it,
and
so,
when
I
do
all
the
updates
to
the
draft
I'm
acting
as
editor
on
their
behalf,
so
it
looks
like
I'm
the
only
one
doing
it,
but
that
situation
is
going
to
change
I
believe
very
soon
and
very
soon
means
within
weeks.
The.
J: So that's why I'm saying we should now be able to leave the holding pattern, and indeed we want to leave the holding pattern. I'm going to tidy up piecemeal changes, but now I'd like to start getting reviews of the whole thing, and I shall be getting the people who have been working on it to give reviews; I'm encouraging them to put them on the list. I will now try and get them to give their comments there.
A: No, not... thanks. All right. I'd be curious to come back to this and see if it gets fixed. So, I want to say a little bit about why this draft is here. The IETF has had an interesting experience in trying to do resource reservation: RSVP has wound up being used only as a traffic engineering protocol, NSIS has simply failed, and anybody trying again with either of those protocol designs is going to run into all kinds of interesting and not particularly nice path diversity issues.
E
The
original
draft
was
written
in
six-man
and
presented
in
ITF
100,
so
what
this
is
a
rewritten
of
the
orange?
No
one
thanks
to
the
ETS
vwg
chairs,
we
address
a
lot
of
comments
we
have
received
and
they
strapped
we
focus
more
on
the
resource
reservation
part.
So
what
we
are
trying
to
do.
We
want
to
have
a
new
reservation
protocol
in
ipv6
through
in-band
signaling,
using
ipv6
extension
header,
the
design
principles
we
have.
We
wanted
to
be
backward
compatible,
which
means
you
know
it
can
co-exist
with
regular
traffic.
E
The
service
we
are
providing
here
is
meant
to
be
used
by
applications
who
have
the
bandwidth
hype
and
this
low
latency
requirement.
It's
now
to
be,
it's
not
meant
to
be
used
by
our
packet
of
flows
and
right
now
we
are
targeting
for
only
when
service
domain,
so
how
it
works,
we're
using
ipv6
extension
headers.
If
you
look
at
your
network,
it
doesn't
need
to
you,
don't
need
to
config
this
header
or
the
header
doesn't
need
to
be
professed
a
purse
S
pay
every
single
router
along
the
path.
You
only
need
to
have
it.
E
The
resource
reservation
on
the
router,
where
the
congestion
might
happen,
your
network
choking
points.
So
what
we
do
when
we
get
the
extension
headers,
we
make
the
duration.
The
queuing
reservation
is
processed
by
the
NP.
You
now
tu
casa,
mi
signaling.
We
wanted
to
be
distributed,
Li
processed
and
that's
how
we
achieve
better
stability.
E
We
have
different
granulite
granularities
for
flows.
The
fine
finest
we
can
do
is
really
flow
level.
That's
like
the
five
different
tuples.
You
can
use
to
figure
out
what
the
flow
is
and
when
we
also
have
transfer
level
pretty
much
different
protocols,
the
kernel
kind
of
level
that's
based
on
the
South
destination
address,
and
we
also
have
the
different
serve
level.
That's
like
packet.
The
flow
are
defined
by
different
DHCP
values,
so
on
stability
and
performance
issues,
and
this
so
far
I
think
that's
their
comments.
E
We
got
the
most
and
this
is
a
classic
question
in
all
these
kind
of
proposals,
so
even
for
in-band
signaling
to
to
on
cue
s
to
resolve
the
u.s.
issue
is
not
it
was.
It
has
been
proposed
before
and
it
failed
because
of
most
cases,
because
I
was
developing
issue
on
the
way
we
are
doing
it.
Now
we
use
distributed
processing
because
of
modern
hardware.
E
So
just
we
have
an
example
here
see,
so
we
have
an
example
here.
This
is
a
really
large
number
stability.
Let's
say
if
we're
the
largest
part
right
now,
it's
like
400,
and
that
has
one
and
Hugh,
and
we,
let's
assume,
like
maybe
half-
of
the
bandhas,
to
have
a
racist
reservation.
So
that's
pretty
much
that's
200
gig
and,
let's
assume
proof
low.
We
have
proof
low
level
reservation
then
prefer
one
flow
inside
a
hundred
Mac.
Then
we
will
need
2000
cues
for
this.
My
interface
and
currently
the
the
modern
hardware,
can
do
this.
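The queue-count arithmetic in this example is easy to verify:

```python
# Worked numbers from the example: a 400G port, half of it reservable,
# with per-flow reservations of 100 Mb/s each.

port_bps     = 400e9         # 400 Gb/s interface
reservable   = port_bps / 2  # assume half the bandwidth is reservable
per_flow_bps = 100e6         # one flow reserves 100 Mb/s

queues_needed = int(reservable / per_flow_bps)
assert queues_needed == 2000
```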
E
It
can
provide
more
than
2000
kills
and
if
we
look
at
reality
with
Yura
hell
for
real
applications
like
network
based
AR,
we
are,
the
bandwidth
requirement
is
larger
than
a
hundred
Mac
and
we
have
less
number
of
flows.
So
that
means
in
reality
the
stability
issue
is
not
eyes
were
sets
the
example
I
have
here.
E
So
as
I
see
you
soon,
the
SCP
as
an
example,
so
they
said
it
shows
how
the
classic
these
diffserv
works.
You
will
have
multi.
Usually
we
have
this.
You
know
this
is
an
example.
So
we
have.
This
term.
Is
reserved
for
a
certain
and
the
SCP
value,
then
the
queuing
is
sort
of
you
know
you
have
the
same
configuration
or
even
differently.
It's
still
prevent
config
and
every
router.
So,
no
matter
what
your
flow
is
like
here,
you
have
this.
E
This
two
flows
for
this:
the
SCP
value
and
later
on
this
outer,
you
have
more
flows,
but
you
still
have
the
same
queuing
capacity,
but
with
the
new
proposal,
the
queuing
and
scheduling
a
dynamically
change
based
on
the
number
of
flows
you
have
in
this
DHCP
class
the
use
cases
for
this
one,
because
we
are
really
providing
the
burn
waste.
A
high
bandwidth,
guaranteed
low
latency
service.
So
we
have
used
cases
internet
I'm
not
going
to
go
to
to
talk
about
a
lot
of
details
about
this
one.
E
So,
basically
we
can
use
this
to
interconnect
PSN
network
and
though
we
have
different
ways
to
make
actions,
but
that's
more
like
a
shouting
question
instead
of
resource
reservation
question
we
can
also
use
it
in
the
question.
Technology
has
trying
to
work
out
so
just
to
make
sure
you.
You
have
guaranteed
resources.
E: We do have some basic mechanisms for authentication, so you have to pass that in order to make a resource reservation.
J: Right. What I would like to see in this draft (I read the first version, I haven't read the second): I'd like to see much more time spent on understanding why this didn't happen before, rather than just assuming it was scalability, because I don't think it was, and even if it was, there were other reasons. So...
J: You need to look at all the reasons, not just scalability. Let me give you one, all right. I was heavily involved in the PCN work to make RSVP scalable, for instance, and we deployed, in the company I was working for at the time, British Telecom, a reservation system for calls, for use by ISPs, not for end users.
J: No, no, it wasn't a pitfall; it just wasn't necessary, because capacity had grown to the point where all the use cases for it could be catered for. And what we think of as a large amount of bandwidth now: by the time you get this through standards and implemented, you will have so much bandwidth that the things you think are large at the moment will be tiny, and that's the problem.
J
But
there
are
other
reasons
as
well
and
the
the
RSVP
RFC
that
says
applicability.
You
know
what
went
wrong
with.
It
is
only
some
of
them
and
it
really
pays
to
look.
You
know.
Do
a
survey,
people
like
many
people
who
have
history
in
this.
There
are
lots
of
us,
we've
all
been
burned
by
it
and
just
try
and
find
all
the
reasons,
because
it's
not
just
scalability
so.
M: I'm basically thinking about directly responding to Bob's question, which was: what were the issues the previous time this was proposed? It was proposed by Larry Roberts, who, you may remember, was the program manager that originally funded the ARPANET. He was starting a new company, the algorithm was specific to his company, and that was kind of a big deal; people didn't like that.
B: Two things that might help in the IETF context are: if you got multiple equipment vendors to actually talk to you and agree on something, that would be huge, I mean, even if it's just kind of the whole idea, because individual companies coming here to do this are not so well received in general, since we're talking about interoperability, yeah.
A: Now, this looks like a reservation protocol. There were some related congestion control ideas that would have gone to ICCRG; I think they ran out of time, I think they ran out of time in London. I, at least, had viewed this as a reservation protocol, as opposed to a congestion control or management protocol.
I: This is a summary of the state of UDP options. Next slide, please. OK, so a bunch of items were updated. I really appreciate everybody's input into this document; it was a way to find all these bugs, and I'll try my best in the last pass to catch most of them. The OCS calculation was buggy, and it should be fixed at this point.
I: In the case when the remaining area is less than 4 bytes, everything just slides forward. We corrected the description of UDP-Lite, and we augmented the list of required options and reordered them; I apologize if that affects people implementing things, but at some point, anybody who wants to be interoperable ought to just use the latest version, pick the right numbers in there, and start work.

There's a reason we're trying to say that everything below a certain number is required. LITE is in there because if you don't put LITE there, then LITE plus FRAG doesn't work anymore. MSS is used for PLPMTUD, that's Gorry's draft, and FRAG is very important for DNSSEC; so all those are included. Right now we don't include TIME, and the reason for that is that the time itself may not be available: as much as it's encouraged, even for TCP there's no requirement that nodes actually know what time of day it is when they do anything.
N: Tom Jones here. Just on the hosts not having time available: while they might not have a view of real time, there have been quite a few changes for TCP timestamps recently in Linux and FreeBSD to disconnect them from a notion of real time and randomize the start point. So if hosts have a notion of an increasing clock, surely they can use the time option.
I
Well,
look
into
that
we
can.
The
one
of
the
one
of
the
other
reasons
is
that
we
don't
want
to
put
a
requirement
inside
UDP
that
will
put
a
mandate
for
something
in
some
sense
outside
UDP
and
it's
unlikely
somebody
would
implement
a
clock
just
inside
UDP.
So
if
the
system
itself
doesn't
have
a
monotonically
non-decreasing
value
somewhere
to
expect
that
it's
part
of
the
protocol
might
might
or
may
not
make
sense
and.
I: It has to be possible, in the broad sense: there's a generic format for any option, which is the kind and length, with a small number of exceptions that are already fixed at the time of the definition of this spec. And so new options are always possible, in the broad sense of knowing whether to skip over them or not, whether or not you participate in the option.
I: That's a question, so we have to go through that to make sure it's possible. Like I said, I don't know if we think that's something we want to add, but I want to be careful about making it a hard requirement, because even in TCP, as far as I can tell, it's not a hard requirement that you have that count: it's recommended, but if you don't support it, you don't support it.
I: OK, so on the next slide: other proposals have to do with using TLV (type-length-value) for all options versus using a fixed header plus extensions. The thing is that the fixed header will not avoid the need to parse things; the only way we can avoid parsing is to get rid of LITE, or at the very least LITE, in the fixed header.
I: In that sense, one of the questions has to do with making OCS mandatory and first. At most it seems like it saves a byte, and it would be more work to move FRAG plus LITE, because now that's four bytes and all of a sudden it would be six, and I'm just not sure that's a win. And if you make it mandatory, then we also have to handle the case where it's zero, because, you know, maybe people don't want to use the checksum option right now.
I
It's
optional,
so
I
have
not
seen
a
convincing
case
for
putting
that
in
as
mandatory
the
EO
l
versus
option
length
field,
the
length
field
doesn't
avoid
parsing
again
because
again,
the
light
could
require
reordering
before
computation
on
things
and
the
length
cost
two
bytes
more,
whereas
the
EOL
in
most
cases
packets,
will
not
have
any
oil
at
all.
The
EO
l
is
only
there
if,
for
some
reason,
we
find
a
way
or
find
a
need
to
have
space
beyond
the
option
that
is
not
covered
by
the
options.
I: I understand that if it's 16 bits there are fewer false positives and fewer false negatives, but it's exactly the same algorithm either way, except for about two lines of code at the end to fold it over and deal with the carry. Otherwise it's the same algorithm; it's the IP checksum, and then you either stay there, or you fold it over and stick it in there. If you fold it over, there's a possibility that you won't find an error when an error did exist; we might not have that detection.
I
That
would
happen
if
other
people
happen
to
be
using
this
area
for
something
that
isn't
the
option
area,
but
over
over
time
a
given
stream
of
UDP
packets.
It's
gonna
trip
up
either
way
so
I,
don't
think
that
that
size
issue
is
a
big
deal.
I
think
it's
more
useful
to
just
make
whatever
field
we
pick
aligned
with
whatever
light
length.
We
want
it
to
be
and
pick
one
so
I'd
like
to
hear
from
other
people
on
the
list.
I
I've
heard
from
a
few
people
on
the
list
about
most
of
these
things,
I'd
like
to
hear
from
new
people
on
the
list
about
all
these
other
proposal
issues
right
now
in
the
last
slide,
the
to-do
items
I
know
that
I
have
to
put
in
more
detail
from
RFC
73
23,
because
the
way
UDP
uses
time
and
the
way
TCP
uses
time
are
not
exactly
the
same,
and
we
want
to
have
it
be
clear.
What
it
is.
I
The
question
is:
is
requests
defined
as
reply
equals
0,
so
if
I
send
the
fee
and
if
I
send
that
that
option
and
there's
a
request
and
a
reply
in
the
same
option
and
reply
is
zero,
does
that
mean
a
request
or
do
we
need
to
send?
Do
we
need
to
have
two
different
options
for
this
I'm
hoping
we
can
do
this
with
one
option
and
sort
of
flip
things
around
like
that?
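The ambiguity being discussed can be made concrete with a toy encoding. This layout is purely hypothetical (the option kind 0x99 and field sizes are invented for illustration); it just shows why "reply == 0 means request" overloads a legitimate timestamp value:

```python
import struct

# Hypothetical wire encoding of a single TIME option carrying both a
# request timestamp and an echoed reply, where reply == 0 is taken to
# mean "this is only a request". Not the draft's actual format.

def pack_time(request_ts: int, reply_ts: int = 0) -> bytes:
    # kind (1 byte), length (1 byte), two 32-bit timestamps
    return struct.pack("!BBII", 0x99, 10, request_ts, reply_ts)

def unpack_time(option: bytes):
    kind, length, req, rep = struct.unpack("!BBII", option)
    # The ambiguity: a peer whose genuine timestamp happens to be 0
    # cannot be distinguished from "no reply present".
    is_request_only = (rep == 0)
    return req, rep, is_request_only

opt = pack_time(123456)
req, rep, request_only = unpack_time(opt)
assert (req, rep, request_only) == (123456, 0, True)
```

This is exactly why the speaker goes on to ask whether to sacrifice a flag bit instead of overloading the zero value.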
I: Should we sacrifice a bit in the header, in this timestamp, you know, the high-order bit, and have that mean "please reply", so that both sides can be nonzero? Or can we set our TS value to zero to say "don't respond"? I don't know the details of this; I want to work them out. I think Gorry's more involved in this than anybody else, so we want to get the details of this right.
B: Gorry at the mic. We need a way of sending a segment through the network as a probe for the PLPMTUD draft (it's got a D at the front). So this draft needs a way of getting a response. I don't know that we want a timestamp stuck in there doing this; we want a request/response way of doing this, and that's kind of what we're using in QUIC. So this is kind of what we're wanting from this.
N: OK, Joe, so the problem with timestamps is that they can be coalesced by the host, and so you can reduce the timestamp space down to your most recent timestamp, and that's what you get echoed. With the probe mechanism we've suggested, you're always getting a response that is tied to a request, and then we can use a more dynamic token; there has to be something that monotonically increases for this space.
I: And again, on the question of the OCS checksum size, whether it should be one byte or two: really, what we're checking for is not in-transit errors. Frankly, we're really checking for whether somebody is using this space for something else, and that's why I don't personally think it matters that much.
B: I'm going to put in my "aha" moment here and suggest that we talk offline; maybe we can go through some of these issues with us and anyone else who's interested. It's a little after the end of the session, and there are not that many people here in the working group at the moment, so if you want to make some final comments, that would be fine. If anybody else has anything burning that they want to raise... Joel? That would be okay, but the audio is not working brilliantly either.
B: Apologies for the various hiccups that have happened with getting things working this meeting, but we at the front have enjoyed it and hope you have also had a profitable IETF here. Thank you, Joe, for your remote presentation; thank you, everybody here. Please use the list, we have lots of things to talk about. See people in Bangkok.