From YouTube: IETF106-TSVWG-20191121-1000
Description
TSVWG meeting session at IETF106
2019/11/21 1000
https://datatracker.ietf.org/meeting/106/proceedings/
A
Let's start. We are going to be short of time in this session, because we're going to include an extra presentation which we had to skip from the previous session, because there was a lot of discussion there. So timekeeping is going to be important, and we will remind people of that. It may be in your interest to make sure that your talks allow time for discussion rather than get cut off before they finish. So please make sure that you keep to the time. The Note Well: this session is covered by the IETF Note Well.
A
The small print does matter, if you haven't read it, including the anti-harassment policy and many other useful things that are written in here. Please have a look at it. And we're going to have to stop again, because we do need a note-taker. If we have no note-taker, then we won't be able to proceed with any of the discussion we intend during the meeting, and that would be pretty sad. So please can we have a volunteer for a note-taker.
E
Just want to make a request, since I'm taking notes: when you come to the microphone to speak, please say your first name, your last name and, if applicable, the company you're representing, because if you don't, I will be trying to work out who you are, and then I won't listen to what you're saying, and then it won't be in the notes.
A
For coordination, Theresa is the backup note-taker, so you might want to coordinate with each other. Okay. So thank you ever so much for volunteering to do things. The DCCP multipath presentation will be first, and again we need to keep to agenda time, so keep moving. L4S will be discussed: the three working group drafts will be presented, then the issues will be discussed following that, and there's an allocated set of time for that. We will use that time; please focus on the issues. And SCE will discuss these.
B
I think these agenda slides have been overtaken by events, to borrow a phrase. So I believe that the two drafts we're going to talk about here are the SCE draft and the Cheap Nasty Queueing draft. The Grimes TCPM draft is tomorrow morning's adventure in TCPM, but it's been included here on the agenda because, if you want to understand how SCE works, you really probably ought to read that draft for completeness, David.
B
And we're going to try to give SCE what's normally the last half hour of the session for these three drafts. Now, we know we're going to get very, very excited with the agenda items that come just before this, but at some point we are going to call time to try to give these folks at least a little bit of time, even though they're not going to get their 15 minutes of fame.
F
I'm Cory, I'm proxying for Joe Touch on the UDP options draft, as he's not in this wonderful location on site in Singapore. UDP options is a working group draft, it's progressing, so look at the draft, please. One of the changes was a proposed change of status to Proposed Standard rather than Experimental, and as ever the chairs have looked at this, but the final decision on the status will come from feedback from this group and advice from our AD.
A
But there has been that change, so be aware. And there were some additions; these four additions came directly from the working group meeting last time. So your contributions by email and at the mic have made a difference, and Joe updated the draft to reflect them: the OCS, that the area past the options must be zero to eliminate the idea of a side channel, and an extended format allowing options of more than 255 bytes, which allows for more padding. Next slide please, David.
A
There's discussion about PadN versus Pad1, in other words a PadN option similar to what we have in IPv6, and whether we need that. Joe told me, in the discussion before this was presented, that nobody really had commented on the list after this was discussed in the working group meeting last time. So if you're one of the people who thought this was interesting, even important, then please get back on the list and comment on it, because if Joe doesn't get feedback, it's Joe's opinion that nobody really cares that much.
A
So if you do care, please just speak up. There was a required-support discussion as well, and people thought that was really, really interesting: we could divide the option space numbering, perhaps, to reflect those. With one numbering, one said, maybe the higher numbers were required and the other ones were not. But do we really need that? Again, there wasn't much discussion on the list after the discussion at the mic. So if we don't get feedback and discussion, there's no way we can draw consensus to add new things to the document.
G
We think there are multi-connectivity architectures already specified or under specification, like 3GPP ATSSS and also BBF hybrid access, which define multi-connectivity between CPEs or mobile phones and carrier networks, and we see they have a strong demand for a multipath protocol supporting UDP or even IP. There's already Multipath TCP, but that only handles TCP, and we see an increasing share of UDP in our networks because of QUIC. So we think it would be good to have multipath support for this as well, in the specific architectures like ATSSS.
G
We selected the DCCP protocol for that purpose because of its unreliable datagram nature, while it keeps state and employs congestion control, which we think is very useful. You find some materials which point out in detail why DCCP is, from our view, the right protocol, in the slides from IETF 104 and 105. This is the framework we have in mind.
G
So it's mainly relying on multipath support for DCCP, extended by a multipath framework, which more or less means we have virtual network interfaces where we can send IP or UDP traffic in; it's encapsulated, it's transferred over the multipath DCCP, and then it exits the virtual network interface on the receiver side again. So pretty simple. There are three drafts related to this, and the work which has been done since IETF 105 is mainly related to the multipath DCCP draft itself.
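The virtual-interface tunnelling just described can be sketched in miniature: packets entering the tunnel get an overall sequence number, travel over any subflow, and a receiver-side buffer releases them back in order. This is an illustrative sketch, not code from the MP-DCCP drafts; the class and method names are invented.

```python
import heapq

class MpDccpTunnelRx:
    """Receiver-side reordering buffer keyed on the overall sequence number.

    Illustrative only: real MP-DCCP reordering also bounds how long it waits.
    """
    def __init__(self):
        self.next_seq = 0
        self.buffer = []  # min-heap of (seq, payload), filled by any subflow

    def on_packet(self, seq, payload):
        """Accept a packet from any subflow; return payloads deliverable in order."""
        heapq.heappush(self.buffer, (seq, payload))
        delivered = []
        # Release the head-of-line run of consecutive sequence numbers.
        while self.buffer and self.buffer[0][0] == self.next_seq:
            _, p = heapq.heappop(self.buffer)
            delivered.append(p)
            self.next_seq += 1
        return delivered
```

For example, a packet arriving out of order is held until the gap before it is filled, then both exit the virtual interface together.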
G
The other two drafts are more or less unmodified, so on the next slide I will explain a little bit of what has been done in the MP-DCCP protocol we propose. We also worked on the implementation and on generating results, but that I will more or less skip today, because it was already part of the ICCRG presentation.
G
One important thing we also implemented in the meantime, which you can see in the middle of the slide, is BBR for DCCP, and maybe within the next year we can also make this open source; let's see. Okay, now coming to the extensions we did to the MP-DCCP draft, it's now version 3. We added a multipath-capable feature to DCCP and also a multipath option to exchange multipath-specific information.
G
We defined several multipath options for setting up an MP-DCCP connection securely, adding or removing subflows securely, and for receiver-side reordering, which we think is one of the main critical points for an efficient implementation, and also for giving paths priority. You can see on the right side which options we have defined so far, and we also defined the handshake procedure. I will not go through it in detail; it's pretty similar to the one that was defined in Multipath TCP.
G
Okay, so this is the part where I usually plan to present some updates on our implementation and results. I will go through it quickly. In previous meetings we already proved that there's a possibility for seamless handover, keeping IP connectivity when we switch from one access to the other; you find that in the slides from IETF 105.
G
We also proved that it makes sense to have reordering on the receiver side; again, the details you find in the slides from my IETF 105 presentation. What is new now, and I already explained it at ICCRG, is that we made a setup for testing real traffic. In that case we tested with YouTube, using QUIC transmission over, or through, our MP-DCCP setup, tested under various path conditions.
G
We can also deal with changes in path characteristics. In this example we combine two paths with one megabit each, and we changed the latency on one of the paths after 60 seconds. In the results you can see the blue dots, that is single path with one megabit only, and all the other results, red, green and grey, are different modes of our MP-DCCP setup using different schedulers and using reordering. So we have SRTT, that is pure scheduling, preferring the path with the lowest latency.
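The SRTT scheduling mode mentioned here, preferring the path with the lowest smoothed RTT, can be sketched as follows. This is a hypothetical illustration; the field names and the congestion-window-room check are assumptions for the sketch, not taken from the draft.

```python
def pick_path(paths):
    """Prefer the usable path (cwnd room left) with the lowest smoothed RTT.

    Each path is a dict with 'srtt' (ms), 'cwnd' and 'inflight' (packets);
    these names are invented for the sketch.
    """
    usable = [p for p in paths if p["inflight"] < p["cwnd"]]
    return min(usable, key=lambda p: p["srtt"]) if usable else None
```

When the preferred low-latency path's window is full, the scheduler spills onto the slower path, which is how two one-megabit paths can be combined.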
G
All the tests we did so far with our implementation were relying on CCID2, and we think with BBR we have some benefits over CCID2, but that people will see next time. Conclusion and next steps: our prototype implementation and the simulations we did so far show very good first results according to the demands of steering, switching and splitting for 3GPP ATSSS and hybrid access as defined at the Broadband Forum. The key components we identified for efficient multipath are scheduling, congestion control and reordering.
G
We see a possible interference between the congestion control employed in the MP-DCCP setup and the congestion control of piggybacked traffic, which we addressed this time at ICCRG, so I will start some discussions on that on the mailing list soon. We have now a BBR implementation inside the Linux kernel for DCCP available, and we expect very good results using BBR instead of CCID2 for multipath purposes.
G
Several discussions with operators and vendors have been initiated now in the context of 3GPP standardization, where we made very clear that we believe multipath support is required for the increasing UDP share. That does not mean that it ends up being MP-DCCP that does it; it could also be something else, but at least we want to make the IETF community aware that there is something needed for that reason.
I
Andrew McGregor. Firstly, I wouldn't quite characterize the way BBR goes wrong in exactly that way, but the point remains: it does, and strange things happen with small numbers of flows on certain networks, so yeah, it'll be a bit weird, and BBRv2 is better. I think this is valuable work. There are plenty of places where something like this could be useful, so yeah.
B
Before the next presentation, I'm going to inject one last closing comment as I'm sitting up here. I want to strongly encourage the work on what is referred to as congestion control of piggybacked traffic. This phenomenon, where we see stacked or nested congestion control loops, is turning up in multiple areas of the IETF, and some progress in figuring out what's safe and what isn't will be very helpful.
J
That was a different meeting. So it's ultra-low queuing delay for all internet applications, including capacity-seeking ones. That means you can have high throughput and very low delay, and the trick is to use congestion control with small sawteeth. What ultra-low queuing delay means is around a millisecond, but it's also very important to get the high percentiles low. It's got to be consistent for real-time applications, because otherwise, if some packets are late, they buffer up to the longest delay, not the shortest.
J
But I won't go into the exact details, because we've got a lot more to do today. The final introductory recap slide is that those scalable congestion controls that allow you to get that low delay also appear to be very aggressive to a TCP-friendly flow if they're in a classic ECN queue together.
J
But if they're in a non-ECN queue, that's fine, because you get a loss and you can deal with it. So we've got a transition mechanism so that this new traffic can be isolated in another queue: either a dual-queue system, as drawn there, or a per-flow queue, which I didn't draw, 'cause I ain't got that much pencil.
J
The other transition mechanism is a packet identifier, to be able to isolate these flows in this dual-queue system, because you can't do it with the flow IDs when you're trying to do it for just two queues. We don't want to do it with flow IDs because you might want to do it in encrypted VPNs and things. And finally, the picture shows the transitions for the ECT(1) codepoint we're proposing to use for this solution.
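The ECT(1) packet identifier described above amounts to inspecting the two ECN bits of the IP header: ECT(1) and CE packets go to the low-latency queue, everything else to the classic queue. A minimal sketch; the function name is invented, the codepoint values are the standard RFC 3168 ones.

```python
# The ECN field is the low two bits of the IP TOS / Traffic Class byte.
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def classify(tos):
    """Steer ECT(1)/CE packets to the L4S queue, everything else to Classic."""
    return "L4S" if tos & 0b11 in (ECT1, CE) else "Classic"
```

CE is included because a packet may already have been marked upstream and must stay in the low-latency queue rather than be re-sorted.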
J
Greg's done this useful slide for me; I no longer consult for CableLabs. There are 89 individuals representing the companies there; I won't read them all out, you can see them on the screen. That's, I don't know, something like 20 companies, more than 20 including those. CableLabs integrated the dual queue into the DOCSIS 3.1 MAC and also developed two concepts.
J
In fact, if you go to these slides online, there are links for things like the non-queue-building traffic and the queue protection algorithm, and all the features managed to be implemented by a firmware update. So no hardware updates to cable modems; it can all be installed on existing modems. And the news this time around is that the specs we published in January, which we told you about, have since been revised.
J
So the next update is that Nokia have implemented a Wi-Fi product with L4S in the stack, and that is due as a product in Q1 2020; they've got other devices planned later around that idea. It uses the DualPI2 technology, which is in one of the appendices of the DualQ document, and that was demonstrated at Broadband World Forum just a few weeks ago. And Nokia want you to know that if you want to have some trials on this, and I hope it's okay to advertise this, contact them there.
J
There was a proposal on architecture in the SA2 working group from Ericsson, Sprint, Google, Nokia, AT&T and Vodafone for enhancements for advanced interactive services, AIS, and that included again a requirement for both low latency and high throughput for those sorts of applications. There have been a number of attempts to get ECN supported in the system architecture, and we're hoping that this one will succeed. There's no guarantee it will; there's a hell of a lot of things trying to get into 3GPP. But we shall see.
J
Most of the other news was that BBRv2 is coming along, the alpha release, and I think all the other ones have been there already, but they're obviously being updated all the time. I showed this on Monday, so I'll skip very quickly through it. This is status against the Prague requirements as it was in July, and this is as it is now, and you'll see it's nearly all green.
J
Yeah, so that's the codepoint schemes along the top to show the difference, and I just wanted to sort of put the main issues there. The primary issue with L4S is that scalable flows would dominate classic flows in a classic ECN AQM, if you didn't have FQ or some form of flow queueing, with the condition that such things are deployed somewhere, and if the fallback scheme that we've developed doesn't work. And I'm going to talk about that in the issues time.
L
Let me relay something from Jabber quickly. Ingemar is on Jabber, and he said, reporting back from 3GPP: the SA2 contribution was just discussed, and we know the reception was overall positive, with some additional proponents and some pushback. It's currently unclear in which timeframe L4S support will be added in the standards; we will have to wait for a debriefing after the meeting next week for more details.
M
So starting off, you know, it's important: we're not doing L4S for proposed standard or standards track; we're specifically targeting informational and experimental RFCs. Plenty of RFCs similar to this exist already, so it's not like we're blazing a new trail or anything. There are RFCs we can look at for sort of level-setting guidance on what the expectations should be.
M
So in Quick-Start the routers are signaling some information about how fast the flow can go, and it also has host behaviors too. And so L4S is a little bit of an odd thing, because the drafts that we're working on in TSVWG right now are primarily the in-network behaviors. And then there are some requirements, the TCP Prague requirements, that flows using the L4S idea are supposed to meet, but we're not actually working on the drafts that describe a protocol doing all this stuff at the moment.
M
So separating TCP Prague from L4S a little bit is different from Quick-Start. In essence, though, the goal of doing experimental RFCs on congestion control is always to not break the internet. We've worked the RFCs, and there are some open issues and questions, usually, in these things, and that's why we're going for experimental rather than proposed standard. But it's important that we have consensus, and that there are things in place in the RFCs that make us feel like we're not going to break the internet.
M
So if the experiment goes badly, there are mechanisms available to operators and sysadmins to see it going badly and put the brakes on it. We need to maybe do a little bit more work on that in the current drafts, I think. But this is the point of doing the work out in the open in the IETF, the way we're doing it. There have been many things done outside the IETF that went badly; old BitTorrent and old RealPlayer are a couple of examples.
N
Jake Holland: so do we have any kind of normative or prior consensus on what the word "break" here means? Because I think we will probably find easy consensus that this will not cause congestion collapse, if that's the definition of break, but I'm not sure you'll find consensus on, for example, some of the tests that have been brought up, right: the existing deployment of gaming routers that promise to provide low latency already, and do so through these queues.
N
For some of these, there's a pretty strong reason to believe, in some cases, that some of the observed L4S behaviors may introduce persistent and frequent queuing delay to systems that were specifically designed to try to avoid such things, right. So the question is, does that count as breaking or not.
L
Mirja Kühlewind. So my understanding was that in this group we'd rather specify requirements for congestion control, and then the actual congestion control work would potentially go to the group that also cares about the protocol. So okay, you're nodding, that's great. But that also means that breaking the internet is not the big issue here, because the requirements themselves would not break the internet, I hope.
M
The URL here you can throw into your computers and go visit, but one thing to keep in mind is we want the main discussion on the mailing list, not going back and forth in the tracker tool with comments. The intention is to use the tool to record the main status, and then, when there are a bunch of webpages and papers and things that we want to refer to, or we're coming towards closure or progress on the issues, we'll do that in the tracker. But don't go back and forth there.
M
Use the mailing list. And then, on the next page, I have sort of a stoplight chart that I wanted to get out to the group, which is my interpretation of the status that each of the issues is at at the moment, and whether or not we look like we're about a document revision away from being ready for working group last call. And for this meeting, I'd like to start closing issues in the tracker where we've obtained enough agreement that sufficient work has been done.
M
If an issue is marked closed, it's not indicating that every problem is solved and that this is a perfect protocol for all uses on the internet; it's indicating that it looks like we've met the criteria for experimental RFCs, basically. And just because I think we're through in closing an issue, it's still perfectly valid to bring up your concern at any time on the mailing list or in any other venue.
M
It's just that I'd mark something because it looks like we're trending toward being good. And then I wanted to say, as co-chair and shepherd: this is a little bit confusing at the moment, because we're talking about the L4S experiment and its use of the ECT(1) codepoint, and we may have another set of specs using the ECT(1) codepoint, and we discussed with both area directors that we're not planning on holding up L4S publication.
N
Question from Jake: sorry, Jake Holland again, just a quick clarification about the back and forth on the issues list. I thought part of the point of the issues list was that things were getting dropped in these sort of voluminous email threads, and that the goal was to capture the main points, to be tracked with the issue itself. So, although I am totally in agreement that we do not want circular arguments on these threads, to make them also equally useless.
M
Helpful people have been adding things. I actually intended to add some things myself, and then I'd visit an issue and I saw, oh, some people have already added extra stuff to that. It's been very helpful, what people have done there. I just wanted to make sure that if everyone in the room starts clicking through, we don't all feel the need to add our two cents to the issues.
M
So you have these: this is my stoplight chart on the issues, and it's again my own thoughts about where we are. Green means we're about a document revision away, from what I could tell; yellow means, sort of, I really don't know if we're making progress; and red, it didn't look to me like we had made progress. So I don't know if it's readable in the room, it's barely legible even to me, but I want to get this out there to the working group because it at least summarizes things.
M
The one thing I think is worth noting is that a lot of the issues seem like they're pointing to number 16 now, the classic ECN interaction; that seems to be sort of a crucial issue that several others hinge upon. So I think when Bob's talking about that today, we want to see if that's going in a good direction.
A
Okay, so note this is the point where we're going to discuss issues. Just before we do: has everybody signed the blue sheets? If you haven't signed the blue sheet, put your hand up. Okay, can we find the blue sheets? Thank you. Let's carry on. So we're going to discuss issue sixteen, if anybody would, yeah.
J
So this shows something like between 4 and 16 times unfairness, and it's difficult to know whether that's acceptable or not. It sort of depends whether this is happening all over the internet or in one or two corners of the internet. Maybe, yes, it's still a pain for the person it's happening to, but it's also only happening when two long-running flows are going together, and they're also two different flows. So you know, it's not good, but it's not starvation; they're both stable.
J
They both proceed. And also the other interesting fact about this is that as the capacity for each flow gets smaller, or as there are more flows so the capacity per flow gets smaller, they get more equal. So yeah, it's not good, but whether it's acceptable is something we leave to the floor to decide, yep.
N
The samples, okay, so we don't really know whether it's only where space is left; okay, this might merit further examination. The other one I wanted to mention was that I thought the guidelines from RFC 2914 suggested that a fairness goal for, you know, different ideas was within an order of magnitude. Some of these data points cross that line, but you know, some of them don't, yeah.
Q
Markku Kojo. So this is when DCTCP and Cubic are competing, okay. So do you have the results where the same number of Cubic flows are just alone, without DCTCP? That is what you should compare to the results in these slides for Cubic, to see better what Cubic gets.
J
Yeah, and it's the same link, same queue. And these are just one against one; if there were two Cubics, they're going to be pretty much on the one line. The number one, by the way, where it says normalized rate per flow, is what you should get if they were equal.
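The normalized rate per flow being discussed is simply each flow's throughput divided by its equal share of the link, so 1.0 means a perfectly fair split. A small sketch, with invented names, to make the metric concrete:

```python
def normalized_rate_per_flow(rates, capacity):
    """Divide each flow's measured rate by its equal share of link capacity.

    rates: per-flow throughputs; capacity: the bottleneck link rate,
    both in the same units. Values near 1.0 mean a fair split.
    """
    fair_share = capacity / len(rates)
    return [r / fair_share for r in rates]
```

For example, 40 Mb/s against 10 Mb/s on a 50 Mb/s link normalizes to 1.6 and 0.4, the 4x unfairness at the ends of the range discussed above.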
S
What are we trying to accomplish by doing all of this work? You're asking me? Well, I guess I'm asking the folks who are asking for more evaluations, and also the chairs, because that is an issue now. The question here in the room is not so much: you put these two flows in a single queue, DCTCP is going to kill the other one, fine. It's: how slowly does it die, or how badly is it hurt? Is that an interesting question for us to be asking?
J
I do want to just point out, as I've said before, that starvation usually means something ratchets down and down and down and gets nothing. Here they're actually reaching a stable state, these two, and there is progress in both of them, so it's not killing it. I know, I know; as I said, it's about four at the ends and about 16 in the middle, but you know, we're talking here about forty meg in the middle, so, okay, a sixteenth of forty.
A
Not an insignificant amount. So, rather than discuss whether we've got that: is there another piece of data that we really want to look at? If there are easy things to do, we can add them and we can finish this discussion. If there's something very specific that people are asking for, please ask. But Andrew's question.
J
This is really the only intractable issue with L4S, the one that SCE was invented to solve, you know. And if that's not an important problem, then fine, but how do we know? What's the consensus on whether this is important or not? You know, where's the line, where's our target, what are we trying to do? I'm sure Wes will.
I
But I do have something I want to point out, which is that if we do this exercise for Cubic versus Cubic, or Reno versus Reno, we're going to get numbers that are somewhat less than 1, because they will fail to fill the link; we'll get numbers somewhere between about 0.5 and about 0.7. So that's really the bar, right: that's really the bar for it actually working. Okay, here we're filling the link, so we get total throughput closer to filling the link, though you can never fill it completely.
T
Right, so Peter Heist from the SCE team. I wanted to answer the question why we're doing this experiment, and that is because there are many RFC 3168 AQMs in deployment today. So we want to demonstrate what is actually going on, so that we show, and not just talk about, what could possibly happen, because it is a serious issue. But I did want to ask; I didn't know if it came up in this issue, or in 17.
J
I've got three or four slides about this; this is just the problem slide, and then we've got the solution coming up. But I do want to pick up on you saying there are a lot of 3168 routers out there: we're talking here about single queues. These are all in a single queue; if it's FQ, you don't get this problem at all.
N
Just as to the point asked about why we want to know this: I would like to say that I am far more interested in how well the classic ECN detection works. If you would do that instead, and show that this goes away when it's functional, then it would make this moot, I think. What this serves as is the baseline, in case it doesn't, and to be able to show the difference; we don't actually need it if the classic ECN detection works for you.
S
Looks very eager, that little blue thing is really moving. I'm going to say that it's always interesting for me to hear about ECN deployment, because it seems to me that it's super difficult to actually find out what the deployment looks like. So if somebody actually has numbers or has experience, I would love to see that, and I am curious to see if you've seen this in the past. But I'd like to know that. That's on the next slide; that'd be useful, yeah.
M
A key thing is to see if people agree that most of the work here is actually in TCP Prague and in hosts, and that, as far as L4S is concerned, we need just sort of the existence proof that it is possible to do the classic bottleneck detection and fallback and whatever, and that this is enough to make us feel comfortable going forward with the L4S in-network behavior RFCs.
N
Jake Holland: if we had such a proof, that would be adequate for me. That is the open question to me: can we in fact meet the L4S requirements with anything, one example of which could be TCP Prague? We haven't seen that yet; that's my objection.
J
I'm holding off one more slide on this until we get to the design of that fallback, because this issue is both about the prevalence of the problem and whether there's a solution. We asked Padma, who presented this stuff in MAPRG some years ago now, to dig into the data, and she did remember that she had looked into the highest point there, the thirty percent in Argentina, and traced it to a particular ISP.
J
Yeah, so in fact, in Pete's original measurements, DCTCP didn't have the problem and TCP Prague did, and it was because of two differences we'd introduced in TCP Prague; once we realized those two differences, we immediately found it. And then, if you look at the next box plot, you can just see it, yeah.
J
That's it, a little bit further down. Anyway, you see the delay on the right-hand side, if you can read that: that spike is about a hundred and fifty, but the rest of it is about, I don't know, five maybe, on the TCP flow, and probably nothing on the ICMP. And when it detects it, it will be okay, going back into FIFO, and essentially the problem has gone away; if you look at the next plot, it shows essentially the other flow congesting.
J
That's not all; we've got paced chirping, and, excuse me, paced chirping is also for when the congestion level disappears mid congestion-avoidance. And also we're working on a faster controller instead of the EWMA; well, it's in addition to the EWMA. It's a PI controller within DCTCP, which I'm not just working on, it's implemented, and the guy who wrote paced chirping is implementing that, because for flows to come in fast, the other flows have to get out of the way faster, and DCTCP is very slow at doing that.
V
Greg White, CableLabs. I think the issue being referred to is the same one that Jonathan Morton is bringing up: that if TCP Prague is in steady state with a small value of alpha, in an FQ system, and another flow starts up, the FQ-CoDel doesn't give enough of a congestion signal to meet the expectations of TCP Prague and cause it to reduce its congestion window quickly enough.
A
The conclusion there is Bob's take; it's not the chairs' determination that that has happened. And the other thing, I've got a small question here: you mentioned the use of paced chirping. Is it possible to find a method that doesn't use paced chirping, for those people who have offloads and other things where that might be hard?
A
J
What he is doing at the moment — well, he's currently upstreaming AccECN; the next job is to look at whether paced chirping is practical, or whether we go for something less ambitious as a sort of first step, that gives you a better and faster start-up, but not necessarily as good as paced chirping. Or maybe we use paced chirping when it works and something else otherwise. But I mean it will always fall back.
A
I would encourage you to look, outside of the paced chirping thing, for something that the IETF can standardize, and then explore paced chirping where you can deploy it. I don't think there are any RFCs on paced chirping, and so you should be careful to have a way of doing it that works within the RFC series, and then improve on that with a separate RFC. Yeah.
A
I agree with Wes. I would look to see these documents published, so I don't want to gate on some future RFC to come along or anything. So, anyway.
M
J
B
M
J
B
I'm gonna go for it if it helps get us moving. David Black, with an individual hat on: I strongly endorse this SHOULD here, and I will cheerlead as loudly as I can that people really, really, really should do this. Maybe we can't firm it down to a MUST, but the SHOULD will take care of this for me, and I think we can go forward with that. I'll have to check on list, but the SHOULD would just take this right off the table, and we can focus on the hard problems. Richard.
W
B
A
W
X
S
As we know, the implementations are changing this, but you mentioned something earlier to me that I realize is probably worth just bringing up briefly: loss detection in time units at the endpoints is usually implemented as a fraction of the round-trip time in the network. However, the switch doesn't necessarily know your flow's round-trip time, which means that whatever resilience we are telling them we're building is not really visible to them, and the degree of freedom that we believe this might provide them might not translate into the degree of freedom you want. Discussion?
J
S
Yes — it can't count on it, that's what you're saying. So I guess where we are going here is: what I want to suggest is definitely a SHOULD, if you're going to have this thing — it should be a SHOULD, and I'm increasingly feeling strongly about that. Separately, I wonder if this should be amended to actually talk about endpoints being resilient to reordering in the network beyond the three-packet limit.
S
Y
S
What Richard said — sorry — at the moment, what he just said is appropriate: if there are endpoints that are doing packet-based — doing adaptive packet-based thresholding — then that's also good enough. Basically, what you want is the endpoint to not fall over if there's more reordering in the network, and I wonder if there's a better way of saying that, rather than saying "loss detection in time units".
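The contrast being drawn — a fixed packet-counting threshold versus time-based loss detection — can be sketched as follows. This is a simplified illustration: the three-dupack threshold is the classic TCP rule, while the reordering-window fraction is an assumed RACK-style parameter, not a value from any spec:

```python
def dupack_triggers_loss(dupacks, threshold=3):
    """Classic packet-counting loss detection: three duplicate ACKs
    declare the packet lost, regardless of the path's RTT."""
    return dupacks >= threshold

def rack_triggers_loss(now, send_time, min_rtt, reo_wnd_frac=0.25):
    """RACK-style time-based detection: a packet is presumed lost
    once it has been outstanding longer than the RTT plus a
    reordering window expressed as a fraction of the RTT."""
    return (now - send_time) > min_rtt * (1 + reo_wnd_frac)
```

The point raised at the mic holds in this sketch too: the time threshold scales with the flow's RTT, which a switch cannot see, whereas the packet threshold is at least externally predictable.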
U
Yeah, I would be supportive of a SHOULD here. I think it's less clear whether the goal is actually to have the endpoint do loss detection in time, or just be more reordering-resilient, so I think calling that out would be useful. The second thing I would like to say: the current transport specs are not enforcing a dynamic reordering requirement, so it might be static, right — unless the transport specs in TCP and QUIC say that the reordering threshold should be dynamic, either in packets or in time.
A
So I want to place myself in the line — no wait, I don't know where to place myself; maybe I'll place myself on the chairs. So here I am. As a chair I find it difficult for us to relax the reordering constraint at the internet layer unless we write a specific draft that does that, and I tried to make comments earlier to make sure that the text here talks about being resilient to a network that might do this, not motivating that particular change in this document.
A
If we get the transports to all be more and more resilient to reordering, then that document could appear here and we could discuss it, but we should not put the cart before the horse, because otherwise we would create a document that would have to get IETF consensus in a difficult way. So, if we're heading that way — I think I have finished my comments. So I will read the new text — you already have read it, I know.
A
J
B
J
B
A
M
A
Good. And we should spend some time on our remaining documents. We have one, two, three, four, five individual presentations in 30 minutes, and we will seek to do what we can, but if the authors help us that would help a lot. I think we'll take Jerome's QCI thing very briefly, as a point of information, so we know where we're up to. So — well, okay, but I was hoping to give the rest — okay, fine, okay, we'll cut it if you want; yeah, that's fine.
A
J
R
R
I wanted to spend a bit more time on these early slides, since I think a lot of the information was also presented at Montreal but seemed to get lost when it filtered through to the mailing list. So: SCE is Some Congestion Experienced. RFC 3168 left a spare codepoint in the ECN field, at the low end of the ToS byte, and one of its suggested uses back in 2001 was as an indication of mild congestion, with the CE codepoint showing severe congestion.
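For orientation, the four codepoints of the two-bit ECN field (RFC 3168) and SCE's proposed reuse of the spare ECT(1) codepoint can be sketched like this — the `sce_mark` helper is a hypothetical illustration of the marking rule, not normative text from the draft:

```python
# ECN codepoints live in the low two bits of the IP ToS/Traffic
# Class byte (RFC 3168). The SCE proposal reinterprets ECT(1) as
# "Some Congestion Experienced" — an assumption of that draft,
# not a published standard.
NOT_ECT, SCE, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def ecn_bits(tos):
    return tos & 0b11

def sce_mark(tos):
    """Apply an SCE mark: only ECT(0) packets are remarked; CE and
    Not-ECT packets are left alone, preserving RFC 3168 behaviour."""
    return (tos & ~0b11) | SCE if ecn_bits(tos) == ECT0 else tos
```

This is why the scheme claims backward compatibility: a packet already carrying CE keeps its strong signal, and non-ECN traffic is never touched.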
R
R
One such detail is how to get the SCE feedback back to the sender, where it's acted upon. We propose using a single reserved bit in the TCP header, as shown here. Obviously, assigning that bit is a TCPM working group matter, and Rod will talk about it there tomorrow.
R
The crucial aspect of SCE is that we've taken pains to preserve all the existing functionality of RFC 3168 ECN within SCE. This means SCE and ECN endpoints and middleboxes are completely interoperable with each other by design, which makes incremental deployment considerably easier. What this achieves is an end to the perennial trade-off between link utilization and queuing delay. The basic cause of that trade-off is the traditional congestion-control sawtooth: if the peaks of the sawtooth are kept at low delay, the troughs will be somewhat below link capacity.
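The utilization cost of that sawtooth is easy to quantify for an idealized flow. A back-of-envelope sketch, assuming a linear ramp between the post-reduction window and capacity (the beta values are illustrative, not from the talk):

```python
def sawtooth_utilization(beta=0.5):
    """Average link utilization of an idealized AIMD sawtooth whose
    peaks just reach capacity: the rate ramps linearly from beta*C
    up to C, so mean throughput is (1 + beta) / 2 of capacity."""
    return (1 + beta) / 2
```

With classic halving (beta = 0.5) the average is 75% of capacity; a gentler per-signal reduction (say beta = 0.85) lifts that above 90%, which is the direction fine-grained SCE-style signals push in.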
R
R
There have been many attempts at this over the years, and the only protocol that has stood the test of time and implementation is ECN, published in 2001, almost 20 years ago, and just seeing significant practical deployment in the past few years.
Those
of
you
working
with
ipv6
will
know
that
June
before
is
the
end.
R
R
However, it is an imprecise signal. Of course, old endpoints and datagram transports can't understand the CE signal, so the sender informs the network that it understands the AQM signals by using some codepoint other than Not-ECT, usually ECT(0). SCE adds an additional channel of information from the network to the receiver, where a drop or a CE mark requests a large reduction in send rate and SCE requests a small reduction. The receiver reflects SCE precisely, but without an explicit reliability guarantee, using the ESCE bit.
R
Okay, next slide, please. A crucial feature of this approach is that there is no ambiguity at the receiver or the sender over whether the network is requesting a large reduction or a small one. Existing AQMs — especially the most widely deployed CE-marking AQM, which is CoDel — expect CE to be interpreted as requesting a large reduction.
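The unambiguous two-level response being described might look like this at an SCE-capable sender. This is a hypothetical sketch: `sce_beta` is an illustrative per-mark reduction factor, not a value taken from the SCE drafts:

```python
def on_signal(cwnd, signal, sce_beta=0.98):
    """Illustrative sender response under an SCE-style scheme:
    a drop or CE mark keeps the classic RFC 3168 multiplicative
    decrease, while each SCE mark asks for only a small reduction."""
    if signal in ("drop", "CE"):
        return max(1, cwnd // 2)             # large reduction
    if signal == "SCE":
        return max(1, round(cwnd * sce_beta))  # small reduction
    return cwnd
```

Because the two signals arrive on distinct codepoints, the sender never has to guess which reduction the network intended — that is the "no ambiguity" claim.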
R
E
Josh from Apple. Just a clarification about that graph where you say sojourn time: do you mean the instantaneous sojourn time for a particular packet, or the minimum over the last 100-millisecond window? — It's instantaneous. — Okay. I think it would be good to clarify that, because everything else CoDel does is the minimum over the window. So if you mean instantaneous, I think you need to call that out as an exception. — Okay.
R
A
R
E
Richard asked a question, but I think the answer was wrong. In that area above the target, where it says CE marking, you can't have a hundred percent SCE, because you can't have both: you can either mark a packet SCE or you can mark it CE, but you've got to pick one or the other. So if it's CE marking, that means it's not SCE — you can't have both, right?
L
Could you elaborate a little on what you mean by backward-compatible? Because my understanding is that if you have a non-SCE-aware flow and an SCE-aware flow competing in the same single queue, you would see very different behavior, right? So I'm not sure whether that's fully backward-compatible, or what do you actually mean? Okay.
R
I do actually have a prepared answer for that. So we're talking about SCE fairness to non-SCE flows in a single queue, and vice versa. The easiest solution is to leave single-queue nodes without SCE marking capability. This takes advantage of SCE's backwards compatibility in terms of drops and RFC 3168 CE marks: the CE response is not materially affected by SCE, and this provides a robust solution.
R
Alternatively, the upgrade of a node to support SCE marking should be taken as a golden opportunity to install some FQ capability. At the most basic level this could be CNQ, which I'll describe later, but of course a true FQ algorithm would preserve fairness better. Some networks may choose to employ Diffserv to isolate SCE-enhanced applications.
A
Is there a further question? Okay, so I have a question for you, and you have three minutes remaining, to give the other presenters at least a few minutes each. And I would also like to use the sense of the room to see how many people have read this and have some thoughts on it. So is there any way you would like to end your talk?
R
R
Number five is the one where we don't meet the requirement, because we're actually worse than Cubic in single-queue competition, but that's something we can work on. Next slide. Scaling down to a fractional congestion window: we can use pacing scaling to demonstrate that, and in fact we had already done that at Montreal. And as far as reordering tolerance on a time basis — well, we're using Linux, and Linux supports RACK. Okay, next slide, please. And there's a brief summary of our progress since Montreal, which I think I'll skip over today, except to say that we've begun an evaluation.
R
A
R
Which is where the next-steps slide comes in — that one, yeah, okay. So we are requesting working group adoption of the SCE draft. We believe SCE is a useful experiment that is already safe to deploy in the general Internet, and we're happy to back that assertion up with data. Some comments have been received and noted. We will continue our RFC 5033 evaluation; there are some bugs and anomalies to track down, we want to improve our search methods, and so on.
J
Yeah, first of all, I just wanted to reinterpret what you said about SCE in a single queue. Essentially, you said SCE is starved in a single queue against Cubic, and so the fact that it doesn't work is billed as an opportunity for the operator to deploy FQ — that's a précis of what you just said, just for the room that may not have followed that very long sentence. And if they don't want to do FQ, it's not a golden opportunity. Well —
R
J
R
A
S
Jana Iyengar. Very quickly: my understanding is that the goal of this is ultimately to get low-latency transport, and if that's the case, I want to put that as the first requirement and backward compatibility as a secondary requirement — perhaps that's my personal opinion. My question, in terms of understanding this, is something that came out of the discussion you were just having: in a backward-compatible environment, in the world that we are in right now —
R
R
R
S
R
Z
Let me ask sort of a clarifying framing question in terms of your first point on this slide, requesting working group adoption. I think if we end up with a working group adoption call and this work is adopted by the working group, we will have two proposals for work in the working group that have similar goals toward a similar set of requirements that the working group has adopted, but with wildly different deployment stories. Is that an accurate statement — in terms of not even wildly different deployments, but wildly different deployment targets, like what you get?
Z
What you get in partial deployment is different with these two proposals. I would suggest to the working group that a way forward is to make the deployment part of that story an explicit part of any working group deliverable that we would take on. This — I don't know if we're actually going to do the call for adoption; it's just that when I —
A
Z
A
A
A
A
Twenty people, maybe 25. Okay. And who is planning to use the specs in here, or implement them, or do experiments with them, beyond the team that's developing them? Obviously the authors are presumably going to progress the work anyway, but who else is interested in, or actually targeting, experimenting with this at the moment? Other people?
A
Somebody — that will be one, two, three, four, five in the room. Okay, thank you. Is there anything else you want to ask the chairs? I think we have some information there: we won't be running the adoption call this meeting, and we can talk more with you offline about what might be required by the working group to have that adoption call, certainly. And I think we should cut here to allow the other speakers at least some time. Okay. Thank you. Thank you very much.
A
Y
— from Perspecta Labs. I have a quick question on the process for the QCI draft. Yes — I think we had some resolutions in the last meeting. Yes. And now I see it's informational — okay, that's good — but what would be the process from the chairs' perspective? Will it be AD-sponsored, or is it a working group item?
A
It is not a working group item at the moment, and we will have to discuss it — we haven't had the chance to discuss that yet — and we will also check with other people about their interest in this, including 3GPP, which is impacted. So we will not make that adoption call until we have consulted with people, but we're encouraging people to read and work on the document to make it better. Okay, thank you.
A
AA
AA
I'll try to be really quick. Fabio Bulgarella from Telecom Italia. We developed some solutions to improve the measurements that are done using the spin bit. I'll be really quick in this part — all of you know, I think, the spin bit working mechanism. We added an additional bit that is used to try to solve three main problems that can be found with the spin bit mechanism.
AA
The first one is packet loss, which tends to cause a wrong estimate of the round-trip time. The second one is reordering of spin edges, which causes the creation of multiple spurious edges, and so multiple observations of round-trip time per spin-bit period. And the third problem is the introduction of halts in the traffic flow by an application-limited sender, which can cause some delay in the edge reflection and so a wrong estimation of the round-trip time. With the additional delay bit we solve —
AA
— these problems: a single marked packet bounces between client and server, so the measurement is done using the time between two consecutive delay samples.
Okay, okay — I'll go directly to the second mechanism, the one we want to focus on. Okay: we also define a second measurement that is possible to achieve using the spin bit in a one-bit implementation, or possible to implement using two bits.
AA
The mechanism is really simple: we mark a train of packets — real production packets. These packets bounce between client and server and complete two rounds, so an observer, placed on either direction, can count and compare the packets seen in the first round and in the second round, and compute a statistical measurement of the loss rate of the connection.
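The two-round counting just described reduces to a small calculation at the observer — a sketch of the counting logic only, assuming the observer can already classify which round each marked packet belongs to:

```python
def observed_loss(first_round, second_round):
    """Marked-train loss estimate: the same train of marked packets
    passes the observer twice; the shortfall on the second pass
    estimates packets lost on the far side of the observation point.
    Returns (lost_count, loss_rate)."""
    lost = first_round - second_round
    rate = lost / first_round if first_round else 0.0
    return lost, rate
```

For example, seeing 100 marked packets in the first round but only 97 in the second suggests a 3% loss rate beyond the observer for that train.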
A limitation in this case is that usually uplink and downlink have different packet rates, but this can be solved by generating the marked train at the rate of the slowest traffic direction; using a token system we can achieve that.
A
I suggest you park there and just go to the end — go to your very last slide — because, yeah. And while you're thinking about what you'd like to say as your ending, I'll just do a brief update from the chairs on this particular topic. We believe that mechanisms to do this could be talked about in this working group. So this one is a mechanism that's being worked on by people who are coming to this meeting; that's great.
A
We also believe that the mechanisms, when their instantiations in protocols are developed, will be developed within the working groups that they target. So in future this work will probably be adopted in a different working group, unless it just becomes a generic mechanism. So if you're thinking about why this is here and what its future is — that's the scope. You can have your final words over the last couple of minutes of the meeting. Oh.
AA
So, without any spin-bit implication — the other thing I should say is that in this working group there's another draft that talks about loss, and this is the one that is mentioned in one of these slides, and we also did a slide where we compared the two mechanisms. So it could be interesting to have a discussion about these two mechanisms and try to find the right place where we can discuss them.
AA
These sorts of mechanisms also matter because I think they are really useful to understand where the losses are located: for example, with our mechanism we can understand if the loss is before the observer or after the observer, and if we use two observers we can also compute the loss that is between the two observers. So this could be a good way to measure the loss inside the domain of a network operator. So I think — thanks.
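The two-observer localization claim can be illustrated with simple counting — purely a sketch, assuming two observers A and B (A upstream of B) with aligned observation windows and no reordering across them:

```python
def localize_loss(sent, seen_a, seen_b, delivered):
    """Compare marked-packet counts at each vantage point to place
    losses into three segments: before observer A, between A and B,
    or after B."""
    return {
        "before_A": sent - seen_a,
        "A_to_B": seen_a - seen_b,
        "after_B": seen_b - delivered,
    }
```

This is exactly the operator use case mentioned: with observers at the domain edges, the "A_to_B" bucket isolates loss occurring inside the operator's own network.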
AB
Quick explanation — do you want to take both short comments? Go. Eric Kinnear: I will skip past a lot of the other discussion that we've been having elsewhere anyway, but the two things I wanted to say are: first, thank you for doing this — I think these are interesting measurements to be able to gain — and secondly, I would be much more interested in a generic way to do this, rather than trying to push this into each and every protocol that needs to do so.
Z
Comment — Brian Trammell, co-chair of IPPM, up here to agree with and contradict Eric at the same time. I think there's a question as to — we have a history of looking at generic mechanisms for these sorts of things in IPPM; the IOAM work is exactly the same sort of thing.
A
I think, just to illustrate how hard this is: IPPM is the right place to decide what the metrics are, because the expertise lies there. So you probably have multiple pieces to put together — we can do mechanisms here, but we would then need IPPM to talk about measurements, and there you would need to influence protocols. None of which should deter you from doing this work; please talk to other people and find the right home.
AC
A
J
Yeah, I just wanted to point to some quite old work done by the IETF called Congestion Exposure, which had a loss bit and a congestion bit that the sender put back into the network from its feedback, so that anywhere in the network you could tell how much loss was upstream of that point and how much was downstream, from any measurement point in the network — and that was at the IP layer, for generic protocols. Thank you.
A
Supported it — oh, going up. Okay, thank you everybody for coming here. A chairs' note that there was one person on Jabber who said they were willing to work on SCE, apart from the people in the room; we will record that. We will be meeting again at the next IETF. We plan to do a lot of work on the mailing lists — please use them, and please be constructive on the mailing list to address technical comments.