From YouTube: IETF93-AQM-20150720-1850
Description
AQM meeting session at IETF93
2015/07/20 1850
A
So, this is the AQM working group meeting. If you're still here from the last meeting, please leave the room. We have a pretty tight schedule, so I would like to start right away. We already have a Jabber scribe, and we have a note taker; thank you, David, and thank you, Colin, or the other way round.
A
Going once, going twice, and sold: so we go with the agenda as it is published. First I'll give a short summary of the status of the working group. Then we will have three quick updates on the status of the three drafts: the evaluation guidelines, ECN benefits, and CoDel. After that we have a presentation by Bob Briscoe, which came, I believe, partly out of the PIE review. Then we have a live demo of a coupled AQM by Koen, and after that we'll have the GSP AQM presented, which recently received a best paper award, as far as I've understood. And if time permits, Bob will go further into his review of the PIE draft.
I don't think that we have anything for the last item, actually. Major milestones that we've achieved: we've passed our first RFC, so RFC 7567 is the long-discussed AQM recommendations. Thank you, Gorry; thank you, Fred; and the others.
A
Thank you. Other working group items that we are basically working on: we have one, actually two, drafts that are currently in working group last call. We have ECN benefits; this has been in a last call already, with Gorry receiving quite some feedback during that round, I believe, but you can comment on this, and all the feedback was addressed.
B
Basically, I just summarized here the different steps from the second version to the third version. We have included Gorry's comments: basically, we have more accurate terminology throughout; we made sure of that, as it was not really good before. We have a clearer scenario for when you have AQMs one after the other, and we also address that it was not clear in the draft what test was actually needed and what was the requirement for each test. So we have added a table that summarizes all the tests and the requirement for each test.
B
Then we have assessed Jim's comments. I'll just summarize the two main points that were raised: basically, Jim wanted us to have running code, but that somehow conflicts with the fact that we want the guidelines not to be very dependent on the platform used to test the algorithms, so that can hardly be possible; and also, we did not speak much about packet scheduling.
B
We have extended this section a little bit, but the problem is that packet scheduling is something that was not in the scope, because at the beginning we just refer to AQM as dropping or marking schemes, and not actually to scheduling in particular. So we believe that these guidelines may be completed by another document that addresses the specific case of scheduling and all the scenarios that go with it, like flow isolation and all these kinds of things, if such a document appears to be needed someday.
B
This document would obviously consider all the tests that we have set up, but just add some more scenarios. And then, from the fourth version to the next version, the version that we have today: we just had a couple of English fixes, and we have also updated the way the metrics are presented.
C
My first time in a pink square at this IETF. Hi, I'm Gorry; my co-author is Michael Welzl, and this is about the benefits of using ECN, or the "ECN benefits" draft, as we have called it. This document has just been quickly revved in the last very short period to fix a few typos and things, but it basically was the result of a long set of discussions on the mailing list about the detailed wording of various sections, as you can see on the list.
C
We are not ready to ship that quite yet, but, all going well, by the next IETF meeting, and ideally within a few weeks, we should be. So I think, on deciding to ship this: other than that it will take a normative reference on another document, which will delay it in the queue, I don't see a problem. Yeah.
E
Alright. As Richard said, this was from me having reviewed PIE. It was actually partly that, and partly from the work Koen is going to present next (Koen De Schepper is going to present next) and all the experiments we were doing on that. You can think of this as either a comment on the original goals of CoDel and PIE, or you can think of it as preparation for, or some of the motivation for, why we designed the DualQ Coupled algorithm that he is going to talk about next. Also, I should point out:
E
First off: "any AQM will do". This is really about, when I was working in BT, "which one is best?" You know: which should we avoid, if we want to make sure we simplify things and use this instead of Diffserv, and have more low-latency traffic and not have so many classes and all that sort of thing?
E
So this is more about making sure it's correct at the boundaries of load and things like that, rather than worrying too much about, you know, "any AQM will do" is the first message. So when we were looking at CoDel and PIE, we ran experiments where we were just loading up with more long-running flows, and we came across very high loss levels, and we got a bit concerned that everyone's been focusing on delay and they've not been looking at loss.
E
PIE gave similar results, not quite as bad, because it was not giving quite as much delay. So this talk is about what we did to work out why that was happening, and what you might want to do as a consequence. The top-level summary is that you cannot get TCP to give you low delay if you want low loss as well, and high utilization: TCP is like a balloon; when you squeeze it one way, it will just come out another.
E
However, you don't necessarily want to have loss levels like four and a half percent, because that gives you more knock-on delay than the queuing delay would have given you, because of, you know, all the timeouts and tail losses and things like that. So this is sort of saying: maybe you want to relax those configuration parameters a bit and not quite go to five milliseconds. So anyway, next slide.
E
The insight for this came from another algorithm we wrote, and in a way I'm sort of saying it's a better algorithm: it's certainly simpler, and it's at least as good. We're calling it Curvy RED; it's sort of like RED, except it's curvy. It's just a polynomial: you know, either a quadratic or a cubic or a quartic curve. So it's a bit like RED: flat, and then up a bit, and then up a bit further,
E
but it's smooth. And the reason why that became interesting to us is because we worked out a really easy way to implement it. If you want a square curve: RED is a linear curve, where you throw a random number and you check whether the queue is greater than that; we just throw two random numbers to get a square curve, four random numbers to get a quartic curve, and so on.
E
But I'll come on to that. The nice thing about this is that as the load increases, if you double the load, you don't get two times more queue, because the pressure increases the further you go up the curve. So you don't get more and more delay (you do get some more delay), but it means that you don't get as much loss as you did before, and you'll see that in the next slide.
E
So it's essentially: drop probability p = (d/D)^u, where lowercase d means the queuing delay, raised to the power of some parameter u; u is the curviness ("u" as the second letter of "curviness", to remind you). The capital D on the bottom (capitals tend to be what I use for constants) is just the point at which it hits a hundred percent; it gives you the slope.
E
It's quite short: it's one line, two lines if you include the drop. It's just: bit-shift the queuing time; if it's greater than the max of two random values, drop the packet. And the previous line in the code is "smooth the queue", so the value of dq there is actually EWMA'd, like RED does. So it's pretty simple.
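The drop decision just described (compare a smoothed queuing delay against the max of u random values) can be sketched as follows. The function name, default values and the use of seconds as the unit are illustrative assumptions, not the speaker's actual code:

```python
import random

def curvy_red_drop(smoothed_delay, D=0.25, u=2):
    """Curvy RED drop decision (sketch).

    The max of u uniform draws on [0, D) is below x with probability
    (x/D)**u, so comparing the smoothed delay against that max yields a
    drop probability of (smoothed_delay/D)**u: a polynomial, "curvy" RED.
    D is the delay at which the drop probability reaches 100%; u is the
    curviness (1 = classic linear RED, 2 = quadratic, 4 = quartic).
    """
    return smoothed_delay > max(random.uniform(0, D) for _ in range(u))
```

With u = 1 this degenerates to RED's single random comparison; raising u flattens the curve at low load and steepens it as the delay approaches D.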
E
So then, this slide I want to dwell on a bit, and I sort of want to wave my arms at it, so I might have to get out of the pink box; or I suppose I could use one of those laser pointers, which I have in my bag. We'll see how we go. So essentially, what this says is (and let me explain the graph first): essentially, you can think of the horizontal axis as more flows.
E
It's normalized load, which means I've done it so that it's applicable whether or not you have a large amount of capacity: with 10 times as much capacity, you would be able to have 10 times as many flows. (Ok, thank you. I guess that's, yep, ok.) So: normalized load is proportional to the number of flows divided by the capacity, all right? And so you can think of that as the pressure
E
that's being put on the link, the more flows you put into it. And then, at any point, the vertical line for that normalized load gives you, for a particular value of curviness, say curviness 1: it will give you dq on this axis here somewhere, say you'll be there, and it will give you the drop probability there.
E
Okay; oops, did that move the slider?
E
And the sort of final conclusion is that ECN gets you out of this dilemma, so you can have flat delay. It's sort of an argument for saying ECN and drop shouldn't be considered to be the same thing, because you can't get out of this dilemma with drop; so you need a different algorithm.
E
I was hoping to avoid that point. The units are dimensionless, because L is proportional to N over X, but then there's the MSS in there and there's a round-trip time in there, and you can go and read the paper if you want to find that out. But I thought I'd try and simplify it for this talk by just saying it's a dimensionless number. The best way to think of it
E
is that, because of the number of flows you've got in the link: if you double the number of flows, each will get half the amount of capacity; flow rate, sorry. And so what I've shown here is, for the particular normalized load value we've got here, you'll get one megabit per second per flow about there, and you'll get half a megabit, therefore, there. So as you double the number of flows, you'll get half the capacity. Yeah.
E
So I think we're nearly done; we've just got one more slide, I think. So the conclusion, I'll just repeat, is that if you squeeze delay, you'll get more loss. And so really, whatever we do: the problem is, you know, once you have any sort of AQM, your problem is then TCP. Once you've got a reasonable one, you can all argue about "oh, my delay is better than his" or "my loss is lower", but actually it's TCP:
E
you cannot get out of these dilemmas. And so look very carefully at people's claims, because they're probably just using different points on a surface, and we already know we've got this delay versus utilization trade-off. And actually, you know, strangely, I'm going to have to say ECN is the solution again.
E
You've not heard me say that one before. The solution to both those problems is ECN: in the first case it allows you not to worry about what level of drop you've got, because it's marking; and in the second case it means that you can have lower delay without getting low utilization, because your sawtooth can be smaller (you can have more teeth). And again, that means you can have more ECN marks; but if you had more drops, it would be an impairment, so you don't want that.
E
So I think that is the main set of conclusions. And I guess the other one is that we now have an AQM that allows you to adjust this curviness, so you can trade off delay against loss, whereas I think PIE and CoDel don't give you that choice. They've essentially made the choice for you that you've got to have constant delay, and they've built the whole algorithm around it, so you can't soften that assumption to get less loss if you want it. Well...
G
Matt Mathis. The thing that you're blaming on TCP is actually a more general problem: congestion control is an equilibrium process, and whatever parameters the congestion control is paying attention to, whatever protocol you're talking about, it trades them off against each other to find an operating point which is consistent with its algorithm.
G
So, yeah: one of the things that has come to be understood about the old RED is that the smoothing function is actually broken, and I was wondering if you'd actually looked at that, or what smoothing function you used. I mean, the RED one in particular, the original, is not a good one; not the right function. Yeah.
F
Stuart Cheshire again. I just want to quickly echo that point, because that's a very good one: we fall into this trap of blaming TCP for this and that, when what we really mean is that any transport protocol would have to have these properties. Any transport protocol that seeks to find out how much throughput is available by trying it, tries too hard, loses, and backs off. So it's hard to imagine a protocol that wouldn't have that same property. (Wait until you've heard Koen's talk.)
E
Yeah, I guess I wouldn't mind PIE or CoDel. I particularly have an objection to FQ, but everyone knows that; and from the experiments we've done, we've found that it scalps off UDP flows, like for your video and things like that, if you've got TCPs in there as well. So, you know, I don't say you should do anything, because I don't particularly like the FQ side of FQ-CoDel, but otherwise it's what you want.
I
This is some work that we have done together with an intern. It's in the scope of the FP7 RITE project, so it's partly funded; I have to say that. So: we are showing a proposal for a new AQM. It's called the DualQ Coupled AQM. We have a draft, which missed the deadline by a minute, and a paper; we have a preprint, which is submitted to CoNEXT, so you can find it on the links.
I
So, what we were looking at: today in the Internet (Bob already mentioned it) there is this compromise between loss and latency, I mean queuing delay, mainly because if you don't have a big queue, you have loss. This is related to how drop was introduced in the original TCP, because it was made with drop. So there is a kind of problem with latency with the current TCPs. We saw also in the data center that there is a big improvement by using data center TCP.
I
So it uses ECN, and we are definitely in favor of ECN, but it uses ECN in a much better way. It has much more frequent marking, because it's a scalable protocol: it means that if the throughput goes higher, it gets the same number of marks per round-trip time, whatever the bottleneck rate is. So it scales up to very high throughput, while today's TCP has the square root in it, so it's getting less and less feedback.
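The "less and less feedback" point can be made concrete with the textbook steady-state relations. The constants below are approximate illustrations, not numbers from the talk:

```python
def marks_per_rtt(window_pkts, scalable):
    """Steady-state congestion signals seen per round trip (sketch).

    Classic Reno-style TCP holds window w ~ 1.22/sqrt(p), so the signal
    probability p ~ (1.22/w)**2 and signals per RTT = p*w shrink as the
    window grows. A scalable control like DCTCP holds w ~ c/p, so
    p ~ c/w and signals per RTT = p*w = c stay constant at any rate
    (c = 2 here, an illustrative constant).
    """
    p = (2.0 / window_pkts) if scalable else (1.22 / window_pkts) ** 2
    return p * window_pkts
```

Growing the window 100-fold cuts the Reno-style feedback per round trip 100-fold, while the scalable flow keeps the same number of marks per round trip at any rate.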
I
If you just use it today, it's not possible: if you try to use it (it's only one line in Linux that you can switch on), you get continuously full queues, because data center TCP is too aggressive, and high drop rates, so the classic traffic will starve. So we need an AQM. And sorry, yeah: maybe I should also explain what we call data center TCP. We see it as a new family of congestion controllers which achieves low loss, low latency and throughput scalability. These features are in data center TCP, which is just an example; other congestion controllers can also have this.
So, with a classic TCP like Cubic or Reno: they are designed for loss and carry that compromise. Data center TCP is designed for ECN and doesn't make that compromise, so you can have it all together, let's say. So, coming back to how we can have an AQM: this DualQ Coupled AQM is the first proposal.
I
So, the concept I'm going to show you in three steps. The first concept is to use a dual queue, and that's necessary because, of course, this L4S traffic, the DCTCP kind of family, can preserve very low latency, so they need their own queue to have this low latency; and the other needs a big queue, because otherwise, if it's drop-based, you get high loss, and for throughput they need a big queue. So, what we do currently in our implementation:
I
We use just pure strict priority to schedule, because we want low latency; the low-latency queue must be getting the low latency, so we use the priority scheduler. And then, of course, like this it doesn't work, so we need a next step: to have a coupling between the two queues. The coupling is based on the queue size, because this queue will be always empty (it gets strict priority), and this queue will always grow if this one is not empty.
I
The red one, I'm sorry. So this queue size is the classic queue size; this will grow if there is not enough emptiness in this queue. And how is this emptiness created? If this queue grows, the marking probability will grow, and we put more marks on this data center TCP traffic. Of course, we also have to control our own queue, and if you use the same marking probability here to drop, then of course the classic TCP will starve again.
I
So how can we make the rate the same? By inducing a drop which is the square of the probability. This square of the probability undoes the square root: with the same p, squared here and not squared there, the rates will be the same. So it's a simple thing; we have to "think twice" here. That's why it's called "think twice".
I
If, for instance, we decided to make a ten percent drop probability (sorry, mark probability) on data center TCP, we do ten percent of ten percent, which is only one percent of drop. And if you just apply it like that, this in fact becomes kind of a curvy rate; this is where Curvy RED comes from.
I
Yes, so this is strict priority: it means that if this queue is not empty it gets served, and this one only gets served when this one is empty, okay? And that's good, because if you do round-robin, of course, then each class would get the same throughput, and if there are 10 flows in this queue and one flow here, this one would get half of the capacity. That's not what we want. By making it strict priority and doing the control loop, the coupling,
I
we make sure that if there is one flow here and ten flows here, this one gets the same throughput as those ten, you see, because they get the same probability, the marking probability; and the one needs a square to undo the square root, and this one can just be applied directly. So it is as simple as that to make data center TCP run with the same rate as classic TCP Reno.
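A minimal sketch of the coupling described above. The linear derivation of the base probability p from the classic queue delay, and all constant values, are assumptions for illustration, not the draft's actual controller:

```python
import random

def dual_q_signals(classic_qdelay, D=0.25):
    """DualQ-style coupled mark/drop decision (sketch).

    A base probability p grows with the classic queue. L4S (DCTCP-like)
    packets are ECN-marked with probability p; classic packets are
    dropped with probability p**2, i.e. the same random test applied
    twice ("think twice"). Squaring compensates for Reno/Cubic's
    1/sqrt(p) response: a DCTCP flow (rate ~ 1/p) and a classic flow
    (rate ~ 1/sqrt(p**2) = 1/p) then get roughly the same rate.
    """
    p = min(classic_qdelay / D, 1.0)          # base (marking) probability
    mark_l4s = random.random() < p            # applied once to L4S traffic
    drop_classic = (random.random() < p and   # applied twice to classic,
                    random.random() < p)      # i.e. probability p squared
    return p, mark_l4s, drop_classic
```

With p = 10%, classic traffic sees ten percent of ten percent, i.e. one percent drop, matching the worked example in the talk.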
I
Yeah, there are some details in our implementation. In the classic queue, for Cubic and Reno and these things, we introduced smoothing; it helps. No smoothing at all in here, because it makes it worse: for data center TCP, forget smoothing, it's not necessary, it makes it worse. A second thing is also that if there is no classic traffic, of course, this queue also needs to be controlled, and we used, for now, just a simple step function, like data center TCP expects.
I
So we evaluated this on a real broadband testbed. I explained it a previous time, in ICCRG, as well. So we did static tests, with long-running flows, and compared the throughput fairness. We went further now: we also did dynamic tests, to see whether we also got good performance with flows started and stopped, and I'll show you the results later; that worked also. That's why we did the smoothing and so on: we got more experience on that. So, this testbed:
I
What it is: there are a number of service routers. It's a broadband residential fixed-access network, where we have service routers and an access node with a real copper line, a residential gateway, a data center TCP server (or PC) at home and a Cubic PC at home, which can talk to a data center where there is a data center TCP server and a Cubic server. In the BNG we typically have per-user shaping; we put it out in a Linux box, we put in a shaper of 40 megabit, and we could check with a lot of different AQMs.
I
So I'm going to show you the demonstrator. I will log in with a remote desktop into this setup, which is in the lab, and we have a GUI, a graphical GUI, which shows the two queues. Well, actually, it's the two classes: this is the queuing delay experienced by the ECN flows; this is the queuing delay experienced by the drop flows.
I
So, after classification; and this is the drop probability and the marking probability; it shows some link utilization; and then here we can vary the number of flows. You can see the throughput of the flows, and we can also have some... So these are long-running flows, a big file copy; these are web-browsing traffic, with either a hundred requests per second or 10 requests per second. That's to show what's going on when you start and stop flows.
I
I have to turn this on myself as well. So, we captured the traffic at this middlebox; this is the AQM server. And so, if we start (I think now the dual queue is active) we can start a few normal TCP flows, the classic TCP flows, Cubic; it takes a while before the commands are executed. So what we see is: we have a curvy queue, and we have the flows competing with each other, and you see a big variation. If we start some data center TCP flows as well... yeah.
I
So it's really because this queue must be fifty percent of the time empty to support the fair share between those two, and when it's filled, it's only slightly; you see the pixel moving sometimes. So it's below a millisecond now, and its filled percentage is very low. So, no matter how many flows we stop and start (it takes a while to stop them, and they're already started), in fact you see at the end the rate is approximately the same.
I
You see data center TCP behaving very well, because all the flows are close together; and the others stray off and wobble, because a classic flow gets only one drop every so many round-trip times, and one can stray off very far because it accidentally got fewer drops, while in the other case the data center TCP gets a couple of marks per round trip. So that's how data center TCP works here. If we, for instance, replace the AQM...
You
see
starch
the
command
by
nephew
cuddle.
You
see
the
difference.
I've
activated
fq
cuddle.
Now
you
see
that
the
fair
share
is
perfect
in
this
case.
That's
because
we
have
a
scheduler,
round-robin,
scheduler,
a
purge
flow
and
the
floor.
If,
as
long
as
the
flow
identification
is
okay-
and
there
is
no
collision
of
kills
it's
perfect,
but
of
course
the
queuing
delay
is
for
classic
flows
very
short,
so
with
a
bigger
drop.
I
But of course, here we need to know the flow identification. With our mechanism (and what we want, of course, is that everybody will migrate to data center TCP) we don't need flow identification; we have a really small queue and we also have good enough fairness, let's say. Maybe to show: if we don't have any other flows, of course, then the queuing delay will be a little bit bigger; these five packets is very different than the five milliseconds. And we get here thirty percent marking, but marking doesn't harm, and we still get a good...
C
Just a quick question, Dave Täht here: you're distinguishing between the two queues by the presence or absence of the ECN markings? (Yes.) So if someone were to run a Cubic flow at the same time as a DCTCP flow, what happens when they collide in your second queue?
I
So this assumes that ECN again can be used as a classifier; but of course, that's another discussion. We might need different mechanisms to identify them, so I know this discussion must be had, but we agree there must be a solution found. So maybe it's not necessary to have this discussion now. Okay.
I
So now this is RED. But of course, this tool makes the CDF of all ECN-capable traffic, so data center TCP here is the only ECN-capable traffic, and there is drop applied to this one.
I
This is only a visualization here: there is one queue, but this shows it, because now you see the queue sizes are exactly the same and the drop probabilities are exactly the same. This tool only extracts the packets and looks if it's ECN: if so, it shows it in here; if it's not ECN, it shows it in there. Or even for CoDel, if we have a hundred or a thousand queues, it's just a classification. But for our case these are really dual queues, these are really the two queues, because ECN goes in here and non-ECN in there. Okay, we...
C
I just want to make one point here, because you said that everybody uses these services: the point is that those flows using the service should use a data-center-like congestion control which has about the same aggressiveness. Technically, it doesn't have to be data center TCP; there are, I think, solutions that will make it simpler, but it has to have about the same aggressiveness. Yeah.
I
It should be scalable: one over p. The sending rate should be proportional to one over p, instead of one over the square root of p, because as long as it's one over p, the squaring in our algorithm, between marking and dropping ("thinking twice"), works out exactly. Okay: so whatever congestion control mechanism or algorithm you have in a TCP, as long as it's proportional to one over the marking probability, it will work.
F
My name is Stuart Cheshire; I'm from Apple. I think Dave Täht was just alluding to something which I talked about on stage at the Apple Developer Conference. That video is available, but I'm guessing it didn't get a lot of publicity. There was a lot of news about the fact that Apple will be requiring IPv6 support for applications, which is kind of not a big surprise, because we've been working on v6 for 20 years.
F
The other bit of news that didn't get mentioned is: we have turned on ECN, with Cubic, for all OS X and iOS devices. That's available in the developer seeds that are available to download right now; and of course, until it ships to the public, all decisions can be changed. Right now, the point of having that in the seeds is because we want to do testing. We know that 99% of networks carry ECN traffic just fine: even if they don't do any marking, at least they don't mangle it.
F
What we're interested in is the other one percent, because one percent of a billion iPhones is still a lot of support calls. So that's the purpose of having it in the seed: to test. If things are successful (of course, I don't know at this stage whether they will be), by the next IETF meeting we could have a billion iOS devices running ECN on the public Internet.
F
I was reporting this news without commentary; I'm surprised not to get rotten tomatoes thrown at me. But the reason I want to draw attention to it in this room is because this is an experiment at this stage, and the point of the experiment is to get feedback, and we've got precious little feedback so far, which I think is lack of awareness of this; which, to me, is much bigger news than v6. So please spread the word, if you haven't done it yourself. I also...
I
As you were saying: coexistence. In data centers, so in controlled environments or in walled-garden deployments like we proposed, it's data center TCP, though. But for public Internet use, I want to say: falling back to Reno or Cubic behavior on loss is important, of course, because there is a lot of network equipment only doing drop. ECN is less of a problem, because there are not many ECN-mechanism routers in the network now; but if there are ECN-enabled routers and devices, then detection of these classic ECN routers is important.
I
ECN negotiation issues are something that should be resolved; and handling a window of less than two when the round-trip time is very low, because we are working here with zero queuing delay. It means that it's also important that the window can go below two: now there is a minimum window of two, so there are always two packets in flight, and if it's a hundred percent marked it cannot go less. That's a talk Bob is going to give in TCPM as well.
G
David Schinazi, Apple. I wanted to ask about the public Internet use: there's kind of a big issue with how DCTCP interacts with regular TCP. One of the nice things about ECN, which we're testing right now, is that we can just turn it on and on most networks it just works; and that way we'd say: all right, we turn this on, then we can go to the network operators and say: guys,
G
now you have an incentive to enable AQM and ECN marking, because a lot of your customers have devices that support them; and they can go up to the management and say "we should put resources into this". There's some motivation, whereas right now they don't have as much. For this, it sounds like you don't have (or at least I'm not seeing) an easy way to have an incremental rollout, because as soon as you put DCTCP on the wire, it just wrecks everything else that's out there. So...
I
You're saying that you'd fall back from data center TCP as it is now: there are some easy fixes that can be done, and then, of course, it will need equipment in the network that supports this square coupling. But since it's a very simple thing, it can maybe be implemented, maybe even in existing hardware; I don't know. But yeah.
I
The other problem is, of course: if we are now enabling ECN at this stage, it will make it harder also to migrate to, or distinguish, the scalable transport protocols, which make perfect use of ECN; it's there, developed, to use ECN. Now we are going to deploy ECN for old-fashioned congestion control schemes. So it's a little bit of a pity, but I think it's...
J
Okay, ladies and gentlemen, my talk is about Global Synchronization Protection. Global synchronization happens whenever you have several TCP flows sharing a common bottleneck: what the flows typically do is a congestion-window reduction all at the same time, or at least half of the flows do it at the same time, and this causes large queue oscillations and large delay overall. The protection scheme is just to avoid this kind of synchronization.
J
It is very simple: it sets a threshold on the queue (the threshold can be in delay or in queue size), and if the threshold is violated, it drops one packet and then it sets a no-drop interval, in which it does not take this threshold into account any more, and it waits until, within a round trip, this one drop causes a congestion-window reduction, so that the queue drops below the threshold.
J
The algorithm is surprisingly simple: it's just two questions. Are we above or below the threshold? And the second question is: is the expiry of the no-drop interval in the future or in the past? If you answer both questions with yes, you drop the packet, and then you set a new expiry value. It's just a value, not a timer; there is also no randomness or anything else.
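The two-question decision just described can be sketched as follows; the threshold and interval values are illustrative assumptions:

```python
def gsp_should_drop(qdelay, state, now, threshold=0.005, interval=0.1):
    """Basic Global Synchronization Protection decision (sketch).

    Question 1: is the queue above the threshold? Question 2: has the
    no-drop interval expired? Only if both answers are yes is a single
    packet dropped, and the expiry re-armed, so the one congestion
    window reduction gets a round trip to bring the queue back down.
    """
    if qdelay > threshold and now >= state.get("expiry", 0.0):
        state["expiry"] = now + interval  # start a new no-drop interval
        return True
    return False
```

The expiry is stored as a plain value and compared against the current time, so no timer and no per-packet randomness are needed.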
J
Next slide: here we have an experiment showing how it works in a real Ethernet testbed. We have a growing queue, and whenever the queue reaches the threshold for the first time, it drops one packet; then the queue grows a little bit further before the reduction manifests itself in the queue size, and then it starts growing again, and it drops again. So this is the so-called single-drop operation of this algorithm. But, well, we can have more aggressive traffic than is expected:
J
We can have, for example, more flows in the traffic; they can have much shorter round-trip times than we expected; or we can have more aggressive traffic, like Cubic, for example; and we can have partially unresponsive traffic. That's why we need to adapt this no-drop interval.
J
The adaptation criterion is quite simple: if you're more time above the threshold than below, then we obviously need to make the interval shorter; and with the shorter interval we finally end up in a kind of periodic dropping, because many times we drop at every expiry of the no-drop interval. So what we get is like a kind of bang-bang control around the threshold: if you are above the threshold, we drop very eagerly; if you are below the threshold, we do not drop at all.
J
From a control-theory point of view, the basic control loop, which is the three statements that I showed initially, is quite stable in itself; and outside of this control loop we have an adaptation loop that integrates all the times above and below the threshold and adjusts the interval accordingly.
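A heavily simplified sketch of that outer loop; the multiplicative update and every constant here are assumptions for illustration, not the published adaptation:

```python
def adapt_interval(interval, frac_above, gain=0.5, lo=0.001, hi=1.0):
    """Adapt GSP's no-drop interval (sketch of the outer loop).

    frac_above is the integrated fraction of time the queue spent above
    the threshold over the last observation window. Spending more time
    above than below shrinks the interval (drop more often); the
    opposite grows it. Clamped to [lo, hi] seconds.
    """
    interval *= 1.0 - gain * (2.0 * frac_above - 1.0)
    return min(max(interval, lo), hi)
```

When the queue is persistently above the threshold, repeated shrinking drives the interval down until drops occur at almost every expiry, which is the periodic, bang-bang regime described above.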
J
What we plotted here is the cumulative distribution function of the queue size for the different algorithms. The curve should start at zero, meaning that an empty queue should never happen; that means we have full link utilization. And the curve should reach one as steeply as possible, to avoid large queuing delays. And it is obvious that the more flows there are, the more gain we get.
J
But what is surprising here is that, with this global synchronization protection, we see almost the same effect as with CoDel and PIE; and also, for larger numbers of flows, it's always the same gain compared to tail drop. And the most surprising thing is that we do not have any gain compared to tail drop with a single flow. For GSP this is
J
obvious, because with a single flow we cannot have synchronization, so we cannot remove any synchronization; but CoDel behaves the same way as tail drop, and PIE spreads the queue even more. Also, PIE's distribution does not start at zero, so PIE is underutilizing the link here, and it spreads the queue even worse than tail drop. Next slide.
J
We did a lot of tests with unsteady traffic conditions: traffic starts and stops, changing rates, different round-trip times, changing numbers of flows, and so on; even changing the drain rate of the queue. Here's one experiment where we added unresponsive UDP traffic: we had 100 megabit of TCP traffic, and we injected into that link an additional 90 megabit per second of unresponsive UDP traffic.
J
Yes, so what I wanted to say is: global synchronization protection, even if it is only intended to remove global synchronization, does essentially the same as the other AQMs; no other AQM is doing anything better in that case, and GSP is a minimalistic solution, just three statements. We have a draft on this here, and we have a paper that was recently published at HPSR in Budapest.
A
I would like to encourage some discussion around this algorithm on the list. Some feedback, especially from hardware vendors, may be interesting, because this sounds like an AQM algorithm that is best suited for core network links, rather than the other algorithms that we have discussed so far, which are more suited for the edges of the network. This, I think, concludes today's meeting. Thank you very much for hanging out here for 10 minutes longer than we had. So, one last minute, Bob: "I just wanted to say it; you've been rattling."