From YouTube: IETF100-TSVWG-20171117-0930
Description
TSVWG meeting session at IETF 100
2017/11/17 09:30
https://datatracker.ietf.org/meeting/100/proceedings/

C
Do we have a note-taker? Okay, all right. Good morning; before we get started here, we need a note-taker. No, the speakers work really well, I don't have to yell into the mic; I've had enough mic episodes this week. We need somebody to take notes. We're just looking for brief notes, not the blow-by-blow of person A said X, person B said Y, just a quick summary of what's going on and what decisions are made. Thank you very much.
C
We get to figure out what DSCP we want to use for the Lower Effort PHB for Diffserv, a FECFRAME update, and then we have two individual submissions, one on PMTU discovery for datagram transports and an in-band QoS signaling draft. And let's see, we're going to do something with the priority scheduler draft this morning; that's getting done right now, almost immediately. Okay, we're done here; we're bashing the agenda, and we'll do that fairly quickly.
C
Just one slide. Your Secretariat and your working group chairs have been hard at work this week; we've been earning our exorbitant salaries, and here are the results. Two new RFCs are out this week: two of the SCTP docs are now published, as RFC 8260 and RFC 8261. And ECN Experimentation, after a fair number of go-rounds:
C
The -08 version of this has been posted. Please review the changes by, what, the middle of next week? In November, yep; please review things by the end of next week. The section on considerations for networks in relation to ECN experimentation is the one that's gotten the most attention, so that could use a careful look at the text.
B
Since I'm document shepherd for that, just to be clear: this has passed IESG last call, and it includes all the text we expect to go into the finished RFC. So we're not really looking for review here; we're looking for "wow, this is right" or "wow, this is wrong". But check it, please, because there have been many changes since it was in the working group. It's a very short timeframe, because we want to send it to the RFC Editor as soon as we can.
D
Okay, so, at any rate, she has posted an Internet-Draft describing this thing, and it contains some code that describes at least a conceptual implementation of it. The idea is that she wants to make the management of elastic traffic a little bit more predictable than, say, AF or simple best-effort schedulers might do. She uses the word AF to mean elastic, or the way we would describe elastic traffic.
D
So there is some room for that in reading her draft. This is a picture of how the scheduler would work: as traffic comes out of the queue, that changes a credit scheme, and on the input side of the queue, traffic would be considered high priority or low priority depending on the state of the credit scheme, the idea being to keep traffic fairly close to a certain boundary. So, next slide... the other slide, yeah, right.
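As a rough illustration of the credit idea just described (a sketch with assumed details; the draft's actual algorithm, parameters, and units may differ):

```python
# Hypothetical sketch of a credit-based priority classifier: departures
# replenish a credit counter in proportion to the class's configured share,
# and arrivals are classed high or low priority depending on remaining credit.
class CreditScheduler:
    def __init__(self, target_share, limit):
        self.target_share = target_share  # fraction of the link for this class
        self.credit = 0.0
        self.limit = limit                # cap that keeps credit bounded

    def on_departure(self, nbytes):
        # every departing byte earns this class its configured share of credit
        self.credit = min(self.limit, self.credit + nbytes * self.target_share)

    def classify(self, nbytes):
        # arrivals within credit go high priority; beyond it, low priority
        if self.credit >= nbytes:
            self.credit -= nbytes
            return "high"
        return "low"

s = CreditScheduler(target_share=0.25, limit=3000)
s.on_departure(8000)          # earns 8000 * 0.25 = 2000 bytes of credit
print(s.classify(1500))       # high
print(s.classify(1500))       # low (only 500 bytes of credit left)
```

The point of the credit bound is the "keep traffic fairly close to a certain boundary" behaviour: a class that stays under its share is served with priority, and bursts beyond it fall to low priority.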
D
So, in simulation, the thing on the left is how a standard AF scheduler would behave given an increasing amount of traffic, with a breakout of, I think, three-quarters of the traffic supposed to be AF/best effort and a quarter of it being expedited forwarding; and then on the right-hand side, with her favorite scheduler.
D
You can see that it is far closer to the exact breakout up to a certain point, and then it spreads, as you would kind of expect it to do with the randomness of the arrival of traffic. This is her PhD thesis, I guess, and she's looking for commentary on that. I think that would pretty much need to happen on the list, because she's not here.
C
Let's actually start here in the room, but also on the list: the chairs are looking for some input on what is appropriate to do with this draft. The first thing is figuring out whether people are interested, so please read this and comment on the list, and after that we'll need to think about what we would want to do with this draft if people are interested. I mean, off the top of my head, it looks like, at the very least, some useful information for implementers.
D
One of the things that I think about is: okay, how do we expect this to wander out into the industry and actually be used? From a vendor perspective, vendors are going to put it into their chips or whatever; you know, when vendors are paid by an operator and sent money, they do things. So if you really like this idea, and think that people would spend money to do it, then we can expect to see deployment, but I would expect that to come back from the operational side.
E
The main idea of the draft is to describe a problem that was found with RFC 6951, which describes UDP encapsulation of SCTP packets. As far as I remember, the problem is that that RFC requires validating the verification tag, and for some SCTP chunks the verification tag is zero or not available. So this draft basically describes the problem, and once it has been described, the draft will die, because then an RFC 6951 bis will be created.
B
Okay, that's one we'd give to Michael Tüxen, yeah. Any ideas about timescales? No, because you're not Michael Tüxen, okay.
E
Then the next one is more interesting. Okay, that's about the RFC 4960 errata and issues. You know that RFC 4960 describes the SCTP protocol, and our work basically is to collect all the issues we have with that RFC and describe them one by one. That will be an informational RFC, and we will use it to produce an RFC 4960 bis. Right before this IETF meeting we published revision -03, which includes the remaining issues from our list.
E
Gap ack blocks, and that might be a reason for some problems on the sender side, so we clearly stated that gap ack blocks should be isolated, like it was in the TCP case. And then we also updated the reference to RFC 2119 with the other RFC reference, I don't remember exactly. Yeah, I mean, that is basically our work: we finished the updates in the document, and we described all the issues we wanted to describe.
B
Okay, similar question: we need some reviewers for this document, which was my reason for asking the first question. We already have one volunteer from Oracle for their stack, that is, the Solaris stack, so that's cool; and we don't currently have a volunteer to read the Linux code and check how that works. It would be great if someone were willing to go look at the Linux implementation; those people typically don't come to this meeting, so that'd be really useful. And we just need other people to sign up to review this.
B
We're going to ask for reviewers on the list, and if we can get people to sign up to review, we will start working group last call. Okay, good. So, one heads-up: this is heading for working group last call. If you would be willing to review it, please tell us on the list, so we know whether there is going to be enough feedback in working group last call. This is a relatively short document; if you're familiar with SCTP, it will be an easy read.
G
Morning. I'm going to do L4S. When the BoF was set up, it was decided it wouldn't be a separate working group, so I'm going to treat these sessions as a bit of a sort of cross-working-group report on what's going on, as well as telling you what's going on with these drafts. So, first of all the status update of everything, and then I'll talk about these drafts. I've stuffed a couple of recap slides in, but you can skip over those.
G
These are just for people who don't normally come. The status of these two drafts: there are three in TSVWG, but the architecture and the identifiers drafts are both really in a holding pattern, as I said last time, waiting for implementations, and there's some good news on that, I think, on the next, or the following, slide. So, status updates: source code is available.
G
We have one implementation in silicon being started, or that has been started; I can't say when availability will be, but it's aimed at the data center environment. So it's the case where you've got existing, what you might call classic, TCP traffic, and you want to move up to Data Center TCP in the same data center, like a cloud.
G
That's the endpoint; and the DualQ Coupled AQM in ns-3, written by Tom Henderson, who was one of the original authors of ns-3. That can be useful for people to do simulations; it's been finished, but will be released in early 2018, being tested. Yeah, right. So I thought I'd talk about this SCReAM implementation, just some early results. This was actually completed on the plane here by Ingemar, so we'll see more results later. This is with CoDel, not with the dual queue, with one SCReAM flow running through it.
G
SCReAM is one of these new flows that can do well for L4S: scalable congestion control. The thing to look at here is the queuing delay; it's in the sort of 20 to 30 millisecond range. This is slowly adapting real-time media. On the left you've got the queuing delay network stack to network stack, and on the right you've got, including... no, sorry, not including, the additional queuing delay in the application, in the RTP send queue.
G
You can ignore the bottom for the sake of this presentation; that's the congestion window, bitrate and things like that. So if you flick now to the next slide, you can see what the queuing delay is with L4S congestion control in SCReAM and in the network, and if you flick backwards and forwards, you can see a significant difference in the network.
G
The answer is not quite the millisecond which we'd like, and which we're getting with TCP, but this is the first run with the new implementation, so hopefully we can get it down a bit. It's certainly in the sort of under-ten, in the two, three, four millisecond range, with one spike, and that's what percentiles are for, right? So, moving on, but that's good news; it's time to see implementations coming out. Regarding documents:
G
This is the state of all the L4S drafts. I've talked about the first two, which are in this working group; the DualQ one I'm going to talk about next, and ECN Experimentation David's talked about. Scalable TCP algorithms: we have DCTCP gone to RFC. We can't use that directly, but we can use it for trials on the public internet, and in data centers we can use it directly. Accurate ECN is coming up, probably for working group last call, to be published.
G
Yeah, I could have just said: Generalized ECN, which is ECN++, has been updated a little. I may need to talk about experiments that have been done on traversal, which have found problems; that was reported in TCPM. Problems on mobile networks, but not problems in terms of anything drastic, just bleaching, which is benign but irritating. ECN support in TRILL is still waiting for the chair to do the working group last call write-up.
G
It passed working group last call, and a design team formed for ECN in QUIC, yesterday in fact. So everything's adopted except for that last one, on ECN in QUIC, and quite a lot is getting through fairly quickly, so we're moving nicely. 3GPP: Ingemar again tried to put in a proposal for putting ECN into the radio link control layer; that was rejected for Release 15, and he's going to retry for Release 16. Next.
C
Bob, quick question on 3GPP, and with luck I'm not sideswiping people with what I'm about to say. My recollection is that Ingemar was trying two things at 3GPP: one was to get this in, and the other, the predecessor, was to remove some problems. Ingemar, I think, can explain exactly what he was doing.
G
Right. So in the draft now, a picture much like this has been added, and we've talked about the structure, which was sort of missing from the front of it before. Two people that have been trying to implement it had read it and found it difficult to understand because of that, so we now have ASCII art looking like that in the document, although it doesn't look quite as good as this.
G
One apparently minor change we've made to the structure is that little factor k there, which you see in k·p': that's the coupling factor across the middle there. We've shifted it; it used to be that p squared at the bottom was divided by it. Now that might sound trivial, but it makes all the input parameters for the AQM independent of each other, so that you can change one without having to change the others, and so we've had to go away from the original command-line API of PIE to do that.
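The coupling just described can be sketched as follows (an assumed reading of the draft's formula: the base probability p' is squared for the Classic queue, and multiplied by the coupling factor k for the L4S queue):

```python
# Sketch of the DualQ coupling relationship (assumed form): a single base
# probability p' drives both queues. Squaring it for the Classic queue and
# scaling it by k for the L4S queue is what couples the two behaviours.
def coupled_probabilities(p_prime, k=2):
    p_classic = p_prime ** 2        # Classic queue drop probability
    p_l4s = min(1.0, k * p_prime)   # L4S queue ECN-marking probability
    return p_classic, p_l4s

pc, pl = coupled_probabilities(0.5, k=2)
print(pc, pl)  # 0.25 1.0
```

Because both outputs derive from the one p', changing k (or any other input parameter) no longer forces recomputing the others, which is the independence point made above.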
G
So you can see the two of them there, the L4S one and the base one; the base one is used for both the classic behaviour and the coupling across to the L4S one. And in the appendix you've got a couple of examples; in fact one of the implementations, the one in high-performance hardware, used Curvy RED rather than PI2. I would recommend PI2, it's the better one, but Curvy RED proved easier for that particular implementation. And so that's nice.
G
We've got implementations of two different AQMs now, and you can also put different AQMs into it. In our first one we've currently got a step, but we're experimenting with a ramp, and the draft is structured similarly: the MUSTs and SHOULDs are all about the framework, not about the AQMs themselves. Next. So, the main thing we've added to the draft, to
G
the structure, is the management requirements: splitting, in that same structure, the configuration of the framework and of each AQM, and what configuration parameters you're going to need in general for any AQM, as well as the specifics in the appendix; and then, on the monitoring side of management, some advice on what you ought to measure. This actually reflects, as you see in the grey box at the bottom, the Linux implementation, which we've restructured so it's classful.
G
So the two queues inherit from the qdisc, and you can get the monitoring stats from each queue inside the qdisc like you would get the monitoring stats of a single queue. Also, the Linux implementation uses the Linux classifier architecture, so that you can add additional classifiers, and we've been talking with David and Gorry about how that may be used, as well as address-type classifiers.
G
The draft... oh, and the experimental part of the draft: I sort of already preempted this, without realizing that this is what they were going to ask, by putting the management requirements in as if it was not an experiment but just what you do, because if you put something in the network, you have to have management requirements. But I hadn't thought about it particularly from an experimental point of view, so we'll do that next time as well.
G
And I should add that some of the more recent changes, the restructuring in the Linux implementation, aren't yet available in the open-sourced one, so we've got to release them. I've already mentioned the relationship there, and we will continue to mention that, and we're also going to be adding a queue protection and policing discussion at some point.
C
This is David Black speaking from the floor, and what I tried to do is sort of give a preview of what's coming here, because we kind of invented this on the fly this week; that's what the IETF is good for, coming up with clever new ideas with details to be worked out. The high-level goal is what's up on the screen.
C
The goal of what we talked about this week, and with luck it will show up in some draft updates, is to enable multiple instances of this structure up there, so that the IP-ECN classifier could also look at DSCP and then send traffic into one of maybe two or three sets of dual queues; and maybe there are some traffic classes for which L4S simply isn't turned on, and there's just the one queue. Details TBD, as Bob, I think, pointed out in a previous slide.
C
Or whatever. So, just a heads-up, and I will admit that I have a bit of enlightened self-interest here, in that after all the work I've done to free up ECT(1), I'd like to make it usable a little bit longer-term for things beyond L4S. Again, we may learn a few things here, and we might want to have another codepoint at some point, and it might be good for the two to coexist, but...
G
What I would emphasize, though, is that the sort of point of L4S was that, ultimately, all your traffic, at least for the public internet and for a lot of cases, could just be in the L4S queue, and the classic queue is just for old stuff. Well, eventually, you know, after decades that will disappear, and then you've got a low-latency service for everything, and it's all simple; you don't have all the complexity of managing diffserv and everything.
L
Okay, next slide. So yeah, that's the draft about the Lower Effort PHB, and basically what's left, as Gorry already mentioned, is to fix the DSCP choice. One important criterion is that other PHBs should not be remarked to LE in case the upper bits are cleared: if the IP precedence bits are set to zero, it should not actually be mapped to the LE codepoint.
L
So the proposal was, after the measurements from Gorry, to basically use a DSCP which is not currently available for standards action. The idea was to open up Pool 3 of the DSCP codepoints for standards action, and, if I understand David correctly, that would require a short process document. I will
B
draft a short process document. The reason for having a short RFC-to-be about this, not really this draft, is to alert everybody else in the IETF that Pool 3 has changed its status. That draft will simply change the status of Pool 3: it will dictate the IANA rules for allocating in Pool 3, and in the meantime your draft can be working group last called and we can allocate from that. Yeah, right, and as
L
So the draft needs at least one more revision, actually, to update to the new DSCP. Our suggestion is to remove this element of the discussion; it's unnecessary, I think. And one thing that is left for discussion is, I think, whether we need to have a section on the update of the RFC-to-be on 802.11. Right, so this is...
B
One last quick question, since he's here: are we going to amend the Wi-Fi mappings now we've included this? I think we talked about this as a possibility when we working group last called the IEEE-related document, and now is the chance to think about whether that's a good idea or not. What do you think, David?
C
I'm of two minds about this. In essence, I think this situation comes down to the classic IETF mantra of rough consensus and running code. What you're looking at on the slide here is some brand-new bits, not yet dry, no rough consensus, whereas the IEEE 802.11 draft is heavily grounded in running code. So, off the top of my head, since I've just been asked, I have to make up the answer on the fly.
B
I think that's the correct position. So what we will be thinking about is whether the other document gets published as-is, and we're talking here about what we should say, first of all, about the new DSCP, and then whether CS1 should be changed, which I suspect we don't change, because it's widely deployed. So this is the thing to be debated here, right.
N
Brian, something else? Yeah: are you going to say any more about the Pool 3 document that doesn't exist yet? That'd be nice, yeah. So did I volunteer to write it? Nice; then comment on what will be coming. I'll just, yeah, not burn carbon with the microphone. I just want to comment on that document. You know, I think this is a perfectly legitimate thing to do, as long as it's properly done. I have very little sympathy for a domain that isn't checking DSCP use at its domain border, because, you know, they're the ones who'll be inconvenienced by this,
N
if anybody is; but they're already in breach of the architecture, so my sympathies are limited. I just wanted to point out what it says in RFC 2474 about Pool 3: it is a pool of 16 codepoints, Pool 3, which are initially available for experimental or local use, but which should be preferentially used for standardized assignments if Pool 1 is ever exhausted.
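For reference, the pool structure Brian is quoting can be sketched as follows (per RFC 2474: Pool 1 is xxxxx0, Pool 2 is xxxx11, Pool 3 is xxxx01):

```python
# Sketch of the RFC 2474 DSCP pool structure: the pool is determined by the
# low-order bits of the 6-bit DSCP value.
def dscp_pool(dscp):
    if dscp & 0b1 == 0:
        return 1          # xxxxx0: Standards Action
    if dscp & 0b11 == 0b11:
        return 2          # xxxx11: experimental / local use
    return 3              # xxxx01: exp/local, standardized if Pool 1 exhausts

assert dscp_pool(0b101110) == 1   # EF (46) sits in Pool 1
assert dscp_pool(0b000011) == 2
assert dscp_pool(0b000001) == 3   # a Pool 3 codepoint, e.g. candidate 1
assert sum(dscp_pool(d) == 3 for d in range(64)) == 16  # 16 codepoints
```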
N
Yes, now you're changing that, yeah, because the pool is not exhausted. So you definitely have to do a formal standards-track update to 2474 to override that "if" statement; otherwise I think it's pretty straightforward, assuming you will leave it as a standards-action pool, just like Pool 1 is a standards-action pool. I think that will be fine, and as between 1 and 5, I
B
don't care. Okay, Brian, I have an additional question for you now you're at the mic. What you said makes perfect sense to me: this must be a PS, that's the obvious bit, because it really updates things. Whether it's exhausted or not, we'll have to write text to say it's exhausted in the manner we wish to use it in. Therefore,
B
doing it is just wordsmithing. Thank you; that's what we seem to spend a lot of our time doing. There's another question, which I've already kind of hinted at: Pool 3 has got many codepoints in it, and we only need one. It looks like 1 and 5 both have similar properties, so it'd be really nice to have standards action for both.
N
It's a keep-it-simple-stupid case: I'd be tempted to just grab the whole pool, because, you know, all diffserv codepoints are recommended, none of them are mandatory, and because of the remarking at inter-domain boundaries you aren't really breaking somebody's code by doing this, unless they're not remarking at the boundary, in which case they're already outside the spec.
C
I would agree also, particularly with the rationale that anybody who's already using any Pool 3 codepoints inside their network clearly is doing diffserv classification and marking at the domain boundaries; otherwise they're doing something really stupid, and we're not going to save them from themselves. And I think, yes, grabbing all of Pool 3 is needed; we definitely need to take both codepoints.
O
Okay, thank you. Yes, I want to do a quick update on the first document and spend more time on the second one. So the first document is about the extension of FECFRAME. Next slide, please, thank you. So, FECFRAME is published as RFC 6363; the goal is to add to the capabilities of FECFRAME to also consider another type of FEC scheme, for doing real-time flow protection.
O
Unfortunately, I didn't have time to work on that for this IETF meeting, but the document is already in good shape. From our point of view, myself and my coauthor, it is almost ready for working group last call. So our plan is to do a quick update in December, maybe, and then, if the group is ready, we can continue the process with this. So let's go on.
O
The next one will take a little bit more time, but I guess that by the next IETF, as I will explain, it will be close to the end as well. Okay, so this is the second document. This document is about a FEC scheme, that is to say, all you need to specify to be able to use a particular type of FEC within the FECFRAME extended framework. We made three main changes to this document.
O
The first one is the addition of a new coauthor who has been working with me on the topic, so it was natural to add him. The second change is about how we manage those FEC schemes. Up until the previous version of this document there was a single FEC scheme, which means that this one FEC scheme covered all the cases where we have coefficients over a finite field: GF(2), or another finite field such as GF(2^8). I won't go into details, but it's simple math.
O
So, well, that was one possibility, but we came to the view that it's probably not the best solution, because if you want to implement and provide a compatible FEC codec for this FEC scheme, you have to support all three variants of coding coefficients, which makes things a little bit more complex to implement; not that much, but a little bit more complex. And in fact you won't be able to switch from one type to another within the same FEC session.
O
It makes no sense, because it would upset the decoding process; once again, I don't want to go too much into the details, ask if there are questions. Okay, so this was an incentive to, in fact, separate them and have different FEC schemes, one for each type of mathematical process. So there is one for GF(2), one for the finite field, sorry, GF(2^8), and if we want we can add more for another finite field or something else. So it's probably the best approach, we believe. Just one more note.
O
In fact, if you look at the document, you will see that, okay, we specify the full scheme very carefully for the GF(2^8) finite field, and for the GF(2) one it's almost empty: we just record a new FEC Encoding ID, a new identifier, to say that, okay, this FEC scheme is for this finite field, you will have to use coefficients over this finite field, and everything else just refers to the corresponding sections of the other FEC scheme.
O
So it's pretty simple, in terms of writing within the document, to specify a new FEC scheme. Next slide. So, okay, the third change we made to this document is the following: we added an additional parameter, the ability to have a density parameter. Just to remind you of the process: we have this sliding encoding window, and each time we want to produce a new repair symbol, we have to consider all the symbols within this sliding encoding window, and we select coefficients.
O
We multiply each symbol by its coefficient and we sum all of those things, so it's simple math. The equation you have on top is typically the one we were supporting in the previous version: all the equations were with non-zero coefficients, so each time you had to build a new repair symbol, you had to sum up all those source symbols, multiplying them by certain coefficients. So what we want to do now is to be able to have sparse equations. Sparse equations means that a subset of those coefficients will be zero, so we ignore the related symbols.
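A minimal sketch of the repair-symbol computation just described, using GF(2) so that a coefficient is just 0 or 1 and the multiply-and-sum reduces to XOR (the actual RLC scheme also defines GF(2^8) arithmetic, which is assumed away here):

```python
# Sliding-window repair symbol: repair = sum over the encoding window of
# coeff_i * source_symbol_i, here over GF(2) where the sum is byte-wise XOR.
def build_repair(window, coeffs):
    assert len(window) == len(coeffs)
    size = len(window[0])
    repair = bytearray(size)
    for sym, c in zip(window, coeffs):
        if c:  # in GF(2), multiplying by 1 keeps the symbol, by 0 drops it
            for i in range(size):
                repair[i] ^= sym[i]
    return bytes(repair)

win = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
print(build_repair(win, [1, 1, 1]).hex())  # XOR of all three -> 152a
```

A zero coefficient simply skips that source symbol, which is exactly what a sparse equation buys: fewer symbols touched per repair symbol.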
O
The encoding and decoding process with full-density equations is a bit heavy. It's not a problem when we are dealing with very small equations, say a few tens of symbols; this is something we can do very efficiently, even on lightweight platforms, even on a phone; I provided encoding speed figures last time. So this is something which is quite easy, but as the equations grow, as we have an encoding window of a few hundreds of symbols, things tend to be quite computing-intensive.
O
This is not a big problem. If this density threshold parameter is zero, then you have the lowest density, which in that case is one over sixteen: one sixteenth of the coefficients will be non-zero, and all the rest will be zero. And when you are using density value fifteen, it means that all the coefficients will be non-zero. So it's pretty easy to address.
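A sketch of the density threshold as just described (the exact mapping from the 4-bit value to a density is an assumption here: DT = 0 giving 1/16, and DT = 15 giving all coefficients non-zero):

```python
import random

# Assumed mapping of the 4-bit density threshold DT to a coefficient density.
def coefficient_density(dt):
    assert 0 <= dt <= 15  # fits in 4 bits, so no extra per-packet overhead
    return (dt + 1) / 16

def generate_coefficients(n, dt, rng):
    dens = coefficient_density(dt)
    # each coefficient is zeroed with probability 1 - density; non-zero
    # coefficients are drawn from the non-zero elements of GF(2^8)
    return [rng.randrange(1, 256) if rng.random() < dens else 0
            for _ in range(n)]

assert coefficient_density(0) == 1 / 16
assert coefficient_density(15) == 1.0
coeffs = generate_coefficients(1000, 15, random.Random(1))
assert all(c != 0 for c in coeffs)  # DT = 15 -> every coefficient non-zero
```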
O
We can carry that in each packet without increasing the transmission overhead, and this is also, of course, an additional parameter that you will provide to the generate-coding-coefficients function, so it's quite easy to manage. Next. Thank you. Question: does it work? Yes, of course. It's pretty efficient in terms of encoding complexity: of course, if you reduce the density by half, then the encoding speed will be multiplied by almost... well, not exactly; we made quick experiments, and it's not exactly multiplied by two.
O
In that case it is plus 26 percent additional speed, but okay, it's efficient. At the decoder it's less obvious: we experienced some benefits, but we are quite disappointed by this 20% benefit. Well, it's really implementation-dependent; we suspect our decoder is not optimized from this point of view, because this is an additional feature that was not put in before. So we still have some work to do on this aspect, but, okay, we are confident that we can improve this a little bit. Next slide.
C
Okay, so it looks like the density parameter is the numerator of... it's x over 16, correct? Yeah? Okay, let me suggest an exponential encoding where, in essence, the numerator is one and the density parameter tells you the denominator: that would give you one half, one quarter, 1/8, 1/16, a power of two every time. Okay, okay. And maybe not exactly that, but that kind of idea would get you some more dynamic range that might be useful in the future. Okay, thank you. No.
B
TSVWG covers quite a wide range of topics, and this is kind of one area where maybe only some people have interests. But if you do have interests, please use the list; please speak to Vincent, because this is a document where we'd love working group input, to help Vincent get this in the best possible format, the best possible way, and progress this to, hopefully, a conclusion in London.
B
We are trying to solve a problem that has been solved before: TCP has ways to discover the path MTU, and basically there are three different ways in which TCP typically does that. It sees Packet Too Big messages from routers, or rather routers happen to send you these, and they happen to reach you, and sometimes that happens, and that tells you directly what the smallest link MTU along the path is. We call that classical path MTU discovery. MSS clamping is widely used for TCP; it's a nice middlebox technique,
B
if you can have nice middlebox techniques, because the middlebox fixes the fact that maybe you have a tunnel or an obstruction along the path and the MTU is actually smaller than you thought; it gives you direct feedback. And finally, these work quite well, but I'm not quite sure how they compare, because we have a third method, which is Packetization Layer Path MTU Discovery, RFC 4821, and this uses TCP segments to probe the path, informed by these other mechanisms, and therefore to find out what actually works. The aim of this is to solve two problems.
B
One is when your network finally evolves to be bigger than 1500 bytes of MTU space and we can send big packets; well, we're going to do that sometime, guys. And the second one is when your network has a few headers that you didn't expect, and now the MTU size is smaller than 1500, which is often the case. So TCP has ways to do this. Next bit: yeah, there's no real way of doing this properly for UDP. The first one works, fine: path MTU discovery works for UDP, except it's not as good as for TCP,
B
for various reasons, mainly because it's hard to verify that the Packet Too Big messages actually relate to the "connection", in UDP land, that you're talking about, because UDP doesn't really have connections. Next one. Aha. There are actually many problems here, and many things that have to be different if we do this for UDP, and always black-hole problems; just read the draft, and we'll go through more as we find those and fix them. What's a good path probe message for UDP? For TCP we can send data segments; if they don't get through,
B
TCP will retransmit them; hey, that's easy. But many UDP protocols don't really have retransmission of data segments, or not normal retransmission; they do some form of repair or something quite different. So the suggestion in this particular Internet-Draft is that we send padding packets: packets that have no value in terms of retransmission. They may be data that doesn't have to get through, or a control channel, or may just be completely null packets of the required length.
B
What path MTU size should we choose? There's no MSS negotiation in a UDP session, because there's no connection setup, so we have to choose a starting value for path MTU discovery. IPv6 is really helpful here because we have a starting value: we can just use 1280, and hey, we know that should work, so we can try it, and that works as a nice starting value. Other things: well, there's actually folklore in the Internet about what might work with IPv4, and how IPv4 might give you blockages at different sizes.
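A minimal sketch (mine, not from the draft) of the starting-value choice described above: IPv6 guarantees that every link carries 1280-byte packets, so 1280 is a safe base, while for IPv4 the classic conservative value is 576 bytes.

```python
def base_plpmtu(ip_version: int) -> int:
    """Pick a conservative starting PMTU for probing.

    IPv6 guarantees a 1280-byte minimum link MTU, so that is a safe
    base.  For IPv4 the classic conservative value is 576 bytes (the
    minimum reassembly buffer size hosts must accept).
    """
    return 1280 if ip_version == 6 else 576
```

The real draft defines its own table of candidate sizes; this only captures the "known-good base" idea.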
B
So we will have an effective starting point for a sensible probe. How do we react to a lost probe, or even, how do we detect a lost probe? To make this work we have to have a UDP protocol for which we can solicit an acknowledgment, and UDP doesn't directly provide this, so it has to be provided by something on top of UDP. And finally, TCP, when it sends packets through the Internet, keeps track of them and retransmits them for you.
B
Therefore you know when your current path MTU no longer works. For many UDP applications we don't have a way of knowing that the current one didn't work; we simply send the data to /dev/null somewhere in the network, and that's kind of not the right thing to do. That's a really bad black holing problem. The solution to that would simply be to send these probes periodically, when you need them, to check that the current size is still working.
B
So we have some challenges here, for which we actually have potential solutions. Next slide. We could use this with a wide variety of protocols. In packetization layer path MTU discovery, the packetization layer is the protocol that sits on top and does the transport functions. So TCP is a packetization layer, but so are other protocol layers directly on top of UDP: SCTP over UDP, SCTP over DTLS. I put SCTP in not to advocate SCTP, but simply because we do have running code, currently in the BSD stack for FreeBSD,
B
that does this with SCTP for these two versions. We're tweaking that and using it as a basis for development. UDP options: we currently have code running in FreeBSD that we are adding this to, with UDP options, and I'd say that these are reasonable strawmen: a sophisticated protocol like SCTP, and a rather simple protocol like UDP options. Therefore anything else that runs over UDP could also be added to this, as other layers.
B
Other definitions: we could add things for STUN, if anyone is willing to work with us to figure out exactly how to do this, and we could add it for tunnel protocols, if there were something generating the acknowledgments and the other things you require. So hopefully this is going to be more generic and more useful than simply these three things. Next. How does it work? Well, if you're sold, perhaps, on "hey, we need this", then perhaps there is a potential solution.
B
What might that solution look like? Is it bizarrely complicated? It's currently this complicated, with a whole list of hints about what sizes to choose. So let's see how this works. Next. We start with no knowledge of anything and no intention to do path MTU discovery. Well, that's boring. Next. At some point we've received an ACK from the other end, so we have the notion of a connection.
B
Remember it's UDP, so there's no real connection; it's just that the packetization layer path MTU discovery algorithm, which is rather long for an acronym, has decided that it can contact the other end, and so it's time to check. So we start probing. We probe with a value we've guessed will work: 1280 for IPv6. Next slide. And then we keep increasing that probe, because if that works, good: we can use the one that we guessed would work, and now we search for a bigger one.
B
We keep searching for a bigger one according to a list of predefined suggestions, until we reach the MTU of the sender, and if we know the MTU of the receiver, we use both of these as a maximum to cap the probing. We probe until either we send lots of probes and they don't get through, "lots" being a number defined in the algorithm, or we receive a Packet Too Big message and we believe it. We think we should authenticate that in some way, so that's part of this algorithm.
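The upward search just described can be sketched as follows. This is my own illustrative reading of the talk, not the draft's state machine: probe candidate sizes from a predefined table, cap the search at the local MTU (and the peer's MTU if known), and treat several consecutive losses at a size as "this size does not fit"; `send_probe` is a hypothetical stand-in for the packetization layer's probe-and-acknowledge exchange.

```python
MAX_PROBES = 3  # "lots" in the talk: losses before giving up on a size

def search_pmtu(candidates, local_mtu, peer_mtu, send_probe):
    """Search upward through candidate sizes; return the largest
    confirmed size, or None if not even the smallest got through."""
    limit = min(local_mtu, peer_mtu) if peer_mtu else local_mtu
    confirmed = None
    for size in sorted(candidates):
        if size > limit:
            break
        # retry a lost probe a bounded number of times
        if any(send_probe(size) for _ in range(MAX_PROBES)):
            confirmed = size          # this size was acknowledged
        else:
            break                     # bigger sizes won't fit either
    return confirmed
```

For example, on a path that forwards anything up to 1400 bytes, the search confirms 1400 and stops when the 1500-byte probes go unacknowledged.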
B
How do you figure out that the Packet Too Big message was really one that you can listen to? Some of them you can't, simply because there is not enough quoted payload to know, and that case is really annoying; other ones you can validate. And the third thing is that we reached the limit anyway. At some point we move over to the left, yeah.
B
So what the probe-error bit does is find the lowest size that works and then restart the algorithm to work with that. And last, maybe we have to keep sending probes all the time while we're in that state, to prevent black holing, which was the scenario where all your packets go to /dev/null. That's the loop at the bottom: is it really still working with the current effective MTU that you're trying? So that's the algorithm.
B
There are quite a few changes to the way in which TCP does it, and one of the things that's subtle, and the reason for the "is it really working" check, is that applications using UDP choose their own message size. So the packet size actually generated by the app isn't necessarily the effective MTU chosen by this algorithm, and there needs to be a little interplay here, just to make sure that it all works and you're not black holing half the data or anything crazy like that. So that bottom loop may be useful for many apps.
B
We have a draft, we have some recommended values, and we think this is a doable project. We have running code in SCTP land, and soon for UDP options, with some things to tweak. So if you're interested, please read the draft and comment on what we have, or even on what's not there. We have to decide at some point what to tell the upper layer, and that's the app in many cases, or whatever is sending using this:
B
what effective MTU to use, and that's not always obvious, so we're trying to figure out when you tell it this. How do we handle inconsistent results? That's a nice can of worms: there are many ways in which this can go wrong, and if we're designing something to be robust, it'd be really good for it to be robust. How do we handle a Packet Too Big message that's bigger than the link MTU, or larger than the probe size? The answer is we try to ignore it; we find a good way to ignore it.
B
How do we handle a path that's got forwarding inconsistency, where sometimes we see one thing and sometimes we see another? Yeah, any bright people in the room want to help me out with this one? I don't know, but I want a way of dealing with middleboxes that change packets, and we're not going to go and blow them up or shoot them or anything.
B
There are some security things to do here; I think that's doable as well. And lastly, what are we going to do? Well, clearly this is a work in progress, and we have an encoding for SCTP. We revised the draft, so we're going to revise the code and check whether it matches the draft. The UDP options one is a very simple change.
P
Michael Lorenzen. So I just want to comment on the usefulness. I can tell you, I helped a friend of mine who had a video-over-UDP application with this exact problem three months ago, and he also had the added problem that he had to handle the difference in the resulting packet size when his application was sending UDP over v4 and over v6.
P
So he had to change things: when he was handling the v6 case he had to actually account for the larger header and all of that. So he came up with a very crude mechanism, less advanced than this: he just basically had three different packet-size values, 1500 among them, tried all sizes, and picked whatever came out of it. So I think this is very useful work. A related question:
P
do we know how many TCP hosts on the Internet have PLPMTUD turned on? I've never actually seen it being widely used. Do we have any numbers? Is there a document I can read? Do the TCP people have this? I don't know. Is this something we should look at, is there someone interested in finding out? I don't know how you would find that out, I think.
P
If there is a researcher in the room interested in figuring this out, it would be very interesting operationally to know if TCP-speaking hosts, like web servers and so on, have PLPMTUD turned on, because otherwise I think we should by now ask the operating system vendors to turn it on by default, instead of having this as a knob that is default off.
B
So if your upper layer protocol, the PL in this case, is something like SCTP, then that protocol will take care of this, yeah; it will have a repair method and it will do something. This is a method that mimics something at the network layer, which is where path MTU discovery belongs, so I don't believe it should be responsible for retransmission.
B
No, the reason I haven't thought about DNS... okay, so I thought about DNS because Allison Mankin said, oh, DNS needs to send big UDP packets and we don't really know how to do it properly. And have I thought about it? No, because I don't do DNS. So if somebody understands the problem and tells me it, then I may well be able to work toward a solution. Yeah.
Q
I mean, I think it's sort of the same problem: it wants to send big packets, and they get fragmented and don't get there, and stuff like that. So this would be very helpful, but they also don't want to add a lot of overhead, which is why they don't usually run TCP. But that's about what I know. Okay.
B
There are three places where you can do this algorithm. The first place is, if you believe UDP options is deployable, which we think it is, you can put it in the stack and use UDP options to do the probing; it's the probe message which is the problem. Then it becomes a stack thing, like PLPMTUD in TCP.
P
for each packet. So, to avoid that, I would prefer, and I don't know, it would maybe mean less deployment, for this to be done by the application developer itself, because then also, if they put this into their protocol, sending it in their own application frames, it means that you have the same five-tuple, and there's less risk of these packets being hashed to different paths, and so of the whole inconsistent-path problem we were talking about, yeah.
R
You know, watching this and saying, once this gets deployed... we've been saying basically forever that, you know, it kind of works, and it kind of works, so it kind of works, and one of these days we're going to be passing around keys and stuff like that in signaling and we're just going to be dead. You know, but over a period of like a decade people have been afraid: it's like, we're going to have to do something about this.
R
We have to do something about this, and I'm not quite sure why; I mean, it's obviously going to be bad, and I'm not quite sure why it hasn't been worse. But it would be useful just to be talking to the major applications, the people that are working in the UDP-based application space. You know, around here I think STUN and DNS would be two interesting ones, since both, you know, are constrained to small MTU sizes, okay.
H
This draft was originally submitted to 6man, because it covers different areas: IPv6, and it also works on transport. So they think that I have to get feedback from the transport working group; that's why I came here. Very likely this draft will be moved either to the transport area or to a new working group. Next page, please. So basically, why did we come to the idea of doing this? We want to do something for transport with quality of service; that's our fundamental goal. And why do we do this? Because we think some applications in the future
H
will need a service that best effort, which is our current service, cannot support. Think about MPTCP: because a single session cannot get enough bandwidth, we need multiple TCP subflows to support bigger bandwidth. So we are taking another approach, which is to have the network devices involved more. We also want to have fine granularity of QoS, down to the flow level: each TCP flow can have a QoS. And finally, taking a lesson from earlier QoS technologies,
H
we want our protocol to be very simple. So we have design targets. First of all, a user or application can directly use the new service. Second, the new service must coexist with current transport technology; no matter whether it's TCP or QUIC or others, it has to be backward compatible. Third, an application can set up adaptive QoS, meaning that once a TCP session is established, the application can dynamically change things such as the bandwidth without tearing down the TCP session.
H
Also, the fourth one is that a service provider can manage the service, to have a good business model. And the fifth one is performance and scalability: the new service and our protocol must be feasible for vendors to achieve. The last one is network neutrality: we want this service to be transport-agnostic; UDP or TCP, unicast or multicast can all use it. Next page, please.
H
So here is our basic concept for doing this. We want to introduce a new transport control layer in IP, which does three major functions. The first is in-band signaling: the signaling message is carried in the TCP packet, but inside the IP header, which means that even if the TCP payload is encrypted we can still use it. Secondly, the QoS hardware programs the information onto the IP path, whatever you use for the IP path, either shortest path or source routing such as segment routing.
H
The second category is congestion control. Because we have the network devices involved in flow-level control, we can detect the congestion state on the path, and the congestion state can be returned to the source. The third category is path OAM. We want the path properties to be detectable, including static properties such as hop count and total bandwidth, and also dynamic ones, such as remaining bandwidth or queue size.
H
Also, of course, we want diagnosis functions. The fundamental technologies used for all of these are signaling processing, QoS forwarding, and QoS state keeping. Basically, the current network processor technology is very powerful: it can not only do normal forwarding and packet processing,
H
it can also do some general CPU-like work. The key part is that we store the QoS state inside the network processor, not the controller CPU, which can dramatically reduce the complexity of the protocol and increase scalability, because the controller CPU is involved as little as possible. Next page. So here is how it works in IPv6. In IPv6 we use two extension headers to do this: first, a hop-by-hop extension header to carry the signaling information,
H
and second, a destination extension header to report the programming state to the source. In this way we can cover any IP path to do the QoS control. For the hop-by-hop extension header, in theory it will be checked by each hop, but we can configure which nodes, for example every hop or only a loose set of routers, check the signaling message and process it. The selection of these nodes really depends on the network's traffic engineering, for example.
H
This is a kind of experimental draft, so we did our experiment with our current product, which is an access router, low-to-middle range, with one NPU with a traffic management module inside. Next page. This is a diagram of our packet forwarding flow. Fundamentally, the block in the line that we call the VIP flow process is the additional module we added for this purpose. Basically, whenever a packet comes in, we check the classification: if it's in the flow-level QoS table, then we do
H
the QoS processing, then go to the same interface as the IP forwarding; if it's not, then it goes through the normal path. Next page. Because the QoS scheduling and queuing are done by hardware, here I just show the high-level hierarchy of the queuing and scheduling. This NPU is more than the normal merchant silicon switch chips, like Broadcom Trident; it has much more flexibility and power. But I checked other vendors' processors, and they are kind of similar, so I believe other products can also do similar things.
H
Basically, we have multiple levels of queues, and each queue has one or two schedulers. We support strict priority and deficit weighted round robin, and also traffic shaping for the queue or the single link, which is old technology and nothing new. Basically, this chip supports 64 gig in one chip. Next page. And here is the result. The top picture shows that we have the VIP flows first, p1 and p2, then we have the traditional TCP coming in.
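As a toy illustration of the deficit weighted round robin scheduling just mentioned (my own sketch, not the NPU's actual implementation), per-flow queues hold packet sizes in bytes, and each flow accumulates a weighted quantum of credit per round:

```python
from collections import deque

def dwrr(queues, weights, quantum=1500):
    """Drain per-flow queues of packet sizes (bytes) in deficit
    weighted round robin order; weights must be positive."""
    deficit = {f: 0 for f in queues}
    out = []
    while any(queues.values()):
        for f, q in queues.items():
            if not q:
                continue
            deficit[f] += quantum * weights[f]   # earn weighted credit
            while q and q[0] <= deficit[f]:      # spend it on packets
                deficit[f] -= q[0]
                out.append((f, q.popleft()))
    return out
```

With flow "a" weighted twice as heavily as "b", "a" drains roughly twice as fast, which is the bandwidth-sharing behavior the hierarchy above provides in hardware.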
H
We can see that the traditional TCP coming in doesn't impact the bandwidth of the VIP TCP. And the bottom picture shows that we have traditional TCP flows coming, then we have a new TCP coming, which we call VIP; then we can see that the new TCP takes over the bandwidth from the traditional TCP, which means it gives higher priority to the new TCP. Because this is just a concept demonstration, we show the network in a heavily congested state: the ingress rate is 4 gig and the egress only a hundred meg.
H
Next page. This is a latency picture. The top left one is the new TCP and the right one is old TCP; the bottom one is the two pictures merged together. We can see that the latency for the new TCP can reach as low as one millisecond, but for our measurement we just made the queue pretty big in order to measure it, because we use a very old technology to measure, using ping. So in theory the real latency for the VIP TCP can be much lower than this.
H
So we can see that just by separating the two types of flows, we can obtain much better latency. Next page. This is the most concerning part: scalability and performance. For scalability: from the beginning we designed this protocol for applications that normal TCP cannot support, which means that for normal TCP applications, such as a web browser, you don't need to use this technology.
H
An application uses this technology only when it wants very high bandwidth or very low latency. For example, if one NPU supports a hundred gig, and we divide 50% of it for the new VIP TCP sessions, then if each TCP session gets 100 meg, we only need to support 500 flows, and the hardware NPU supports about a thousand. Which means scalability is not a problem, because in a real router the NPU is per-port based, not centralized like a controller.
H
For performance: in our testing we measured about one to two milliseconds of per-hop processing; when the signaling message comes in, we need about one millisecond. Let's assume it's 10 milliseconds per hop; then 30 hops is only 300 milliseconds, which is still acceptable for a new technology, because compared with RSVP, for example, this is much, much faster. Next.
H
Here is some content that's not in the draft, because the draft is about the framework, not focusing on congestion control; but because of the community here, I'll just briefly introduce what we do for congestion control in this regard. Even though the network can provide some guaranteed service, it doesn't mean congestion control is free, because normally what a real implementation can achieve
H
is that the CIR, the committed information rate, is guaranteed, but we don't guarantee the PIR, the peak information rate; this is the normal implementation, to make the utilization higher for the network resource. So if we guarantee CIR and not PIR, we still have to do some congestion control. Basically, for congestion control, here is what the sender should do: we measure the RTT, then calculate two windows, a minimum window and a maximum window, corresponding to the CIR and the PIR.
H
Then we choose the minimum one as the effective window to send data packets. This is the window we still have to use with the given algorithm. Next. For the rate control we still have to make changes: basically, for example, if you have a CIR guaranteed, you can start the traffic at that rate directly; you don't need to go through slow start.
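The window computation just described can be sketched numerically. This is my own interpretation of the talk, with invented names: the committed rate (CIR) and peak rate (PIR) are turned into byte windows via the measured RTT, the congestion window is capped at the PIR-derived maximum, and the CIR-derived minimum is always available, so a new flow can start at the CIR without slow start.

```python
def windows(cir_bps, pir_bps, rtt_s):
    """Convert committed and peak rates (bits/s) into byte windows
    for the measured RTT (seconds)."""
    min_win = int(cir_bps / 8 * rtt_s)   # bytes guaranteed in flight
    max_win = int(pir_bps / 8 * rtt_s)   # bytes the path may accept
    return min_win, max_win

def effective_window(cwnd, cir_bps, pir_bps, rtt_s):
    """Clamp the congestion window between the CIR and PIR windows."""
    min_win, max_win = windows(cir_bps, pir_bps, rtt_s)
    return max(min_win, min(cwnd, max_win))
```

For instance, with CIR 100 Mbit/s, PIR 200 Mbit/s, and a 10 ms RTT, the windows are 125,000 and 250,000 bytes.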
H
We adjust according to the different traffic rates. Also, congestion control is used to distinguish the cause of packet loss, because in the classic algorithm packet loss is used as the detection of congestion; but with our hardware involvement we can distinguish better. For example, we can detect the remaining bandwidth, the buffer depth, and the buffer threshold, and all this information can be reported to the source. Then, if packet loss happens after the buffer threshold is exceeded, it means the loss is really caused by congestion, not by physical failure.
H
Okay. If a packet is lost due to a timeout, it's very likely caused by a physical failure of a link or the network. So for the congestion and failure actions we have a lot of things we can do. For example, for congestion loss we can do the same as traditional TCP, where the source reduces the traffic, but not by half: we can reduce directly to the minimum window. And if the packet loss is a random physical-layer loss, then we don't even need to reduce the rate; this will dramatically improve the throughput.
H
If it is a persistent physical failure loss, then the source can reduce the window to one, meaning the network has some problem and I need to set up my session again. Re-signaling means sending the signaling again to reprogram the path, because the IP path may have changed; until then we send as little data as possible.
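The three loss reactions above can be summarized in a small decision sketch (mine, with hypothetical names): unlike classic TCP's multiplicative decrease, congestion loss drops straight to the CIR-derived minimum window, random physical-layer loss leaves the window alone, and a timeout, suggesting a path failure, collapses to one segment while the path is re-signaled.

```python
def on_loss(kind, cwnd, min_window):
    """Return the new window after a loss of the given kind."""
    if kind == "congestion":
        return min_window   # the guaranteed CIR is still reserved
    if kind == "random":
        return cwnd         # not congestion: keep the current rate
    if kind == "timeout":
        return 1            # suspected path failure: re-signal the path
    raise ValueError(f"unknown loss kind: {kind}")
```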
H
The VIP flows, which we call VIP TCP, and the normal TCP flows share the same link, and each VIP flow is configured with a CIR and a PIR. The system accounts the resources: the sum of the CIRs must be less than the link bandwidth. So we have two congestion scenarios: one is VIP congestion, one is TCP congestion. Basically, if there is no congestion, each VIP flow is guaranteed to obtain its minimum bandwidth of CIR, and the TCP behavior is the same as before.
H
But the TCP bandwidth is the link bandwidth excluding the real bandwidth used by the VIP flows. When the VIP flows are congested, it's different: only the CIR is guaranteed, meaning that if your flow rate is less than the CIR, you get the guarantee; above it, it depends on how much bandwidth remains and what algorithm is used.
H
We can have either: the actual rate is proportional to the PIR, or the actual rate is distributed evenly between all the VIP flows. Also, TCP congestion doesn't impact the VIP flows, because the scheduling and queuing are totally separate; TCP congestion only affects TCP flows, and the VIP flows still get whatever they want. Next page.
H
Again we have two scenarios: no congestion and congestion. When the VIP flows are congested, the VIP queue will build up; then we can measure the latency based on the dynamic queue depth, and TCP congestion doesn't impact the VIP flows, which means that no matter what, the VIP latency is still measurable. Next page. So this is the summary: this is a framework for this new scheme to do transport service with QoS, and this work is also part of ETSI.
L
This is Roland Bless, regarding RSVP-style approaches. Usually signaling that tries to be simple has some problems, with, let's say, at least security issues. So I don't actually know what the trust model is; I mean, I did a lot of work in NSIS back in those days, but I don't see what you expect from the endpoints and how you trust the marking. And at least one functional thing that worries me is that you're not passing back the report message along the same path as you do the reservation request.
L
So if you have, let's say, not enough resources: say you asked for, I don't know, eight megabits per second or so, but then you can't get that. Usually what happens is the request gets decreased along the path, and so you have reserved resources at the beginning of the path and you want to get them freed, if you could only reserve, let's say, four megabits per second up to the end. I don't see how that happens in your scheme, okay.
H
Here we definitely wanted to get rid of the complexity of RSVP. For the reservation, as I said, we keep the reservation only for a fixed lifetime; if a node sees no packets coming, then the resource will be released. This also means that I don't need to come back to reserve like RSVP: we only have programming in one direction with one message, and we don't need to come back. The way back just reports the programming state.
H
If the programming state has failed, then the source will decide whether it will use the traditional TCP mode or reduce the bandwidth requirement, because even if some node holds a reservation, that reservation will be reclaimed after no packets arrive for a while. Okay, yeah. Also, for security: we do have some simple security mechanisms in the signaling, and we can guarantee that the signaling message is sent by an authorized user. That's the basic security.
H
Also, we have things for this, which we call path repair. For example, usually rerouting happens on a link failure or whatever failure; then our forwarding state will fail, a report will be sent to the source, and when the source detects it, the source will try to resend the signaling message, which will repair the path, similar to how rerouting recovers from a failure.
C
David Black, speaking as an individual. I think you may have answered part of this question by saying that right now this is scoped to a single operator, a single administrative domain. Yes. How does incremental deployment work? Can you deploy it incrementally, or does the operator have to enable this on every node?
H
Definitely. In the draft I list a couple of the issues we have to study further: multiple domains, and heterogeneous networks, all those things we can study further. But for multiple domains, definitely it can work; think about the old TDM service or ATM service. Those have a lot of plumbing issues, and they work. This one is simpler than that, because this one doesn't involve any new protocol. Wait a minute...
C
Timeout, timeout. You know, I clearly have very different ideas about what the word "protocol" means; I think I see three new protocols in here. So in essence the question is: how is this going to behave if part of the network supports it and part of it doesn't, and the source of a flow is on a path that doesn't support it? Okay, so I think you've got a walled-garden system here, and I'm at pains to understand how this would get deployed on the Internet in general, yeah.
G
Bob Briscoe. I read the draft, and I don't think you've addressed the question of buffers that are at layer two, not layer three, or within tunnels, and getting these signals to go up and down into tunnels. Yeah, I mean, you know, there are a lot of other problems with this. Yes, yes, but that's, I think, certainly from my experience, where you just can't get at the bits. Yes, go on.
H
Basically, those kinds of things relate to heterogeneous networks, so tunneling, or L2, or MPLS; we need to have some interworking schemes for those. This one is just the first step, the framework for a pure IP network in one administrative domain, as I stated; we cannot cover all of the others, and even this is more than 30 pages. So yeah, all those issues we are thinking about.
H
In ETSI, every three months we have a conference just like this, but with more detailed information and much longer presentation time than this, so there we have a lot of analysis of the problem, and even of the political issues like network neutrality, because the design targets definitely consider that; otherwise service providers won't accept it. So we have detailed analysis for that, and for this draft and all the other work there will definitely be updates. Okay.