From YouTube: IETF95-ICCRG-20160404-1400
Description: ICCRG meeting session at IETF 95, 2016/04/04 14:00
A
Thank
you,
sir.
Yes,
that's
okay,
thanks,
I'll
I
get
to
study.
B
The agenda for today: there are four talks. David Hayes will then talk about congestion control for recursive networks, and I'm not going to read the titles, since you're going to see them all anyway; you have them on the agenda online. At the end we have what is now a 15-minute slot for discussion, and it would be great if that slot could grow a little. So there's very wishful thinking here: the speakers might even be faster, which would be very, very unusual, but we can always hope, because we have some new ideas.
G
Good afternoon, my name's David Hayes from the University of Oslo, and I'll be talking about some work we've been doing on a clean-slate internet architecture and the lessons we are learning from that. It's joint work with Peyman and Michael, and it's in the PRISTINE project. Now, PRISTINE, who we are and what we're doing: it's a European project, and that's the URL you can look up there.
G
We've got quite a number of different partners who are working together on different aspects of this project, and we're working on something called RINA, the Recursive InterNetwork Architecture. It's a clean-slate type of internet architecture design where, hopefully, we learn from a lot of the lessons of the present internet and look at how you might design a network having already learned those lessons. And one of the things we're doing is we have some tools.
G
A couple of the things are actually going into ISO and look as if they'll progress there. As well as that, the research, which is what we're most interested in here: we're looking at, with an architecture such as this, how you handle scheduling and quality of service, routing, congestion control, which is what I'll talk about here, and security aspects as well.
G
Just to look at the differences: everyone in this room is probably very familiar with the left-hand side. Now, that's in its simplest form; it's not what you usually meet, because of course we have tunnels, we have proxies, we have middleboxes. That's ideally what it is, but often there are a whole lot of other layers, repeated layers and so on, that make it more complicated. A recursive architecture takes the idea that, well, for a lot of these layers, some of their functionality is the same.
G
You have addressing at the IP layer, you have it at the transport layer, you have it at the MAC layer, and so on. So for all of these repetitive things, rather than have a separate layer for each sort of function, they have a generic layer and then adapt the way those particular layers function. Now, there are two examples: RINA, the one we're working on, and those were John Day's ideas; and there's another one that's very closely related, the Recursive Network Architecture, which Joe Touch, I think, was involved with.
G
Now, this example comes from a paper that Peyman will be presenting at the ICC symposium in a few weeks' time, where we have a simple scenario: just a two-hop network, and what it would look like in the layers with TCP, if you use something like split TCP, and with RINA, which is down the bottom. So with RINA we have just the two layers above the physical layer in this case, and they look almost the same.
G
They're called DIFs, for Distributed Inter-Process Communication Facility, but it's just a general-purpose layer, and in this case they do congestion control and error control, and also multiplexing and routing, but they're the same sort of layer. So with this we had two levels of congestion control, and the results in the paper, which are only preliminary, just comparing the two architectures, show that there are performance gains to be had from a network that does things that way.
G
Now, just before we go into the congestion control bit: when we talk about congestion control, there are several ways, in inverted commas, of "solving" congestion. If you have congestion somewhere, your network can adapt to it; in RINA that's part of the way it works, it can adapt to the congestion and reroute traffic along different paths. Or you can restrict traffic coming into your part of the network through admission control or policing or something, to help with congestion. Or, as is the case with TCP, the end systems adapt by reducing their rates.
G
Now, in RINA we're looking at the fact that you can have congestion control under congestion control, linked to congestion control. It has advantages, but when you're stacking and chaining mechanisms like this, of course: how stable is that going to be? And how do you get the signals from wherever the congestion might be to the different entities that are controlling?
G
This is just an example of a host connected to a server through something like the internet, through a couple of different ISP networks, and how you would layer it in RINA. They would be called DIFs in RINA, but they're just sort of generic layers, and you would probably just have flow control at the bottom red ones, maybe congestion control, maybe not, but at the green, blue and purple ones you would have full error control and congestion control operating. But each layer is roughly the same sort of thing; it only differs in what policies you provide it.
G
We have push-back to do with queuing mechanisms, we have packet marking and reflection, and we also have control packets that we can send to indicate different things. In this case the green layer would be doing congestion control, managing the congestion there, though the blue layer and the end host may also be doing congestion control as well.
G
Potentially that can actually provide much faster responses to congestion, and it means that the network can have much smaller buffers, so improved latency. Of course, the issue that some of you are probably seeing when you look at that is: well, how are all of these congestion control things going to relate to each other, and is it going to be stable, or just oscillate all over the place?
G
And that's what we're looking at as part of this project, and our initial results say that if you manage your feedback properly for your congestion controllers, you actually do get the benefits without the instability; if you don't, then you can get that. Now, the other interesting thing we're looking at is new congestion control mechanisms. This one was inspired by logistic growth in nature. First, just to show that we do go back further than 10 years in this sort of research.
G
This was 1838, when it was first published, and it looks at how, in nature, populations increase, but there's often a limit to what they can do, and how that maps out in terms of a network: the capacity would be our K there, a limit to our population growth. The first part of the curve is actually exponential; it's a bit like slow start, except you can manage the rate at which it is exponential.
G
So, with this mechanism, transferring it to a congestion controller which is working with rates, we calculate our rate based on our last rate; the r parameter there is like a birth rate, or in some sense how fast you can increase; and the factor C is the capacity. Then the tricky part, which is why it's hard to do this sort of thing in the internet, though it can be done in a clean-slate type architecture, is that we have an estimate of what the other sources over this link are doing, as well as the number of them.
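The update just described can be sketched as follows. This is a minimal illustration of the Verhulst logistic step the talk refers to, not the PRISTINE code; the function name and the way the competing sources' aggregate rate is passed in are assumptions.

```python
def logistic_rate_update(rate, r, capacity, other_rates_estimate):
    """One logistic-growth step for a rate-based congestion controller.

    rate: this sender's current sending rate
    r: growth parameter (the 'birth rate' in the logistic model)
    capacity: the estimated link capacity C
    other_rates_estimate: estimated aggregate rate of the other sources
                          sharing this link (the hard part to obtain)
    """
    total = rate + other_rates_estimate
    # Verhulst logistic growth: near-exponential while the link is empty,
    # flattening out as the total offered load approaches capacity.
    new_rate = rate + r * rate * (1.0 - total / capacity)
    return max(new_rate, 0.0)
```

With no competing traffic the rate grows; once the estimated total exceeds capacity, the same formula pulls the rate back down.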
G
Moving on from that, we're looking at predator-prey food chains as inspiration for congestion control, because they have lots of interactions, they're chained, and they can be stacked, and the stability of them and the models of them can work. There are enhancements of this one to provide a congestion control mechanism that works well when it's chained to other congestion control mechanisms as well as stacked, and that's an ongoing area of work with us at the moment.
G
OK, so where are we going, and how does this relate to the current internet? Well, those layers feeding back congestion control indications: although it doesn't look like the internet, that actually happens in the internet. We have congestion at MAC layers, we have error control, not just at TCP but at other layers underneath, and they all interact, sometimes badly, in the internet.
B
Alright, we're going to do something very unusual before we allow any questions, and that is, well, it's my fault really: I did forget to ask for a minute taker. No harm done yet, but as soon as the first question is issued, we have a problem, so we need a minute taker. Is Aaron here? Because I know we did make a deal. Other minute takers, come on.
H
OK, Bob, please go. Bob Briscoe. You'd say that these layered congestion controls exist in the current internet. I've tried to look for things like that, and I've not found anything. I mean, yes, there's error correction, there's AQM now, but I don't think there's congestion control. You're right that, for instance, ATM and frame relay had the ability to do that sort of thing, but you don't...
D
On the chaining of congestion control: that is something slightly different than just having TCP connections chained, because the ultimate source of data still is at the far end. All your signaling has to go back all the way to the sender, to the original sender. Are you not able to hear me?
D
Yeah, there you go; you can have a conversation here at this mic. So, basically, when you have chained congestion control loops, you still have to pass any congestion signals that arise in the network all the way back to the sender, because your original source of data has to be notified to stop sending somehow, am I not right?
G
If it is a source that does congestion control, rather than a different type of source that maybe is just admission controlled, sure. But yes, that is right, and that's where it can be tricky: how you tunnel whatever congestion information you get, so that the source gets everything it needs. Which sort of relates to some of the stuff that's happening here, when we're trying to get information from the MAC layer, with congestion happening there, and ECN-type signals that propagate all the way.
D
Yeah, okay, but if they are operating at the same timescale, you will still have weird interactions between the loops if they're operating.
D
The coupling is very important, I would point out, and it's important to consider how much latency gets introduced when you have that coupling, because usually, when you have these multiple loops, you can introduce instability if you don't have a lot of buffering. But if you have a lot of buffering, you end up getting...
F
Most of these ISPs, members of the chamber, are suffering from severe bufferbloat. We can see in the figures here the round-trip times of the connections; for example, these are connections going from the caches, like Akamai or Google caches, to the end users in the networks of the members, and they are suffering round-trip times of sometimes half a second, sometimes a quarter of a second. You can see there are very high minimum round-trip times and large average round-trip times as well.
F
Sometimes the round-trip times are stable, because the bottlenecks are exclusive to a single user, and sometimes the bottlenecks are shared by many users, and you see this time variation in the round-trip times because of the sharing and the overloading that happens at peak hours. So let's see the traffic profiles for regular ISPs that generate these situations.
F
This is because we expected that we were going to find no traffic limited by the receivers. In the figures on top we see traffic coming from caches to end users. This time we also have this yellow traffic on top, which is UDP traffic limited by congestion control in user space, such as QUIC. We are not sure, I think, what's going on with that congestion control, but it is significant in the traffic from Google, and on the upstream side, on the right, we see that most...
F
So we looked inside, at the added support for these connections, to divide the connections that are congestion controlled by the sender from the connections that are flow controlled by the receiver, and we found that they were experiencing very similar average round-trip times. You can see that in the top figure.
F
Without causing this round-trip-time growth. But when they share a common bottleneck with regular connections, they starve; that's why they're generally called less-than-best-effort, as Mike has been teaching all of us about all these options. Another way to combat bufferbloat that is very popular right now is AQM, active queue management, but we found that most of the ISPs were not planning, not even planning, to use it, or maybe they have not yet studied these options.
F
They really aren't even involved in this problem of bufferbloat, but end users were still having problems and feeling them. The problems of bufferbloat can be felt by the users when they download, when they surf the web, because usually web pages are composed of many, many short files, a lot of pictures, and that means a lot of short transactions. For these short transactions, latency is more important than bandwidth, so we experimented with many options to fix this.
F
We came up with a variable for getting feedback from the network that we call X. The purpose of this feedback value is to estimate the share of the available bottleneck capacity that our connection is getting, and we calculate this variable as the proportional rate response to in-flight data size variations. Generally the in-flight data size is limited by the congestion window at the sender, but it can also be limited by the receiver window at the receiver.
F
Our objective was to prevent bufferbloat if possible, but it's not always possible: when the queue is already overloaded with packets, there's no way to bring the round-trip time down. So we said, in those cases, to revert to regular receiver window management. So we set a threshold, and window growth was allowed only when the X variable was above that threshold, and the receive window went down when we detected that connections were leaving the bottleneck and these big windows were not needed.
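The receiver-side rule just described can be sketched as follows. This is a minimal illustration under stated assumptions, not the presented code: the names, the step size, and how "leaving the bottleneck" is detected are all placeholders; the feedback value x is the rate response per unit of in-flight growth, as in the talk.

```python
def update_receive_window(rwnd, rate_delta, inflight_delta,
                          threshold, step, leaving_bottleneck):
    """One step of threshold-based receive-window management (sketch).

    x estimates the share of bottleneck capacity this connection gets:
    how much extra rate each extra unit of in-flight data bought us.
    """
    if inflight_delta == 0:
        return rwnd                     # no new information this round
    if leaving_bottleneck:
        return max(rwnd - step, step)   # shrink: big window no longer needed
    x = rate_delta / inflight_delta
    if x > threshold:
        return rwnd + step              # growth still buys rate: allow it
    return rwnd                         # growth would only add queueing delay
```

When growing the window stops yielding proportional rate (x falls below the threshold), the receiver holds the window and thereby caps the standing queue.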
F
We can see here, I'm going to show you just a few graphs, how these connections evolved. On the left side we see a regular connection with a CUBIC sender and a DRS receiver, a dynamic right-sizing receiver, and we can see both connections at the top, how they grow their byte sequences. At the same time they both get the same rate; both of them are alone at the bottleneck at this time in this test, and we can see the round-trip time for the base connection.
F
This should be familiar for all of you who have studied CUBIC: how the round-trip time grows until it finds the first packet loss, and then it grows again. In the second connection, on the right, the round-trip time is curbed by the receive window, which is not allowing the number of unacknowledged bytes to grow much above the bandwidth-delay product. That's in the lower figure, in the inset there.
F
This time we are seeing four connections. The first one, the purple one, is the one using the Palermo receiver, and the other three are using the typical dynamic right-sizing receive-window adaptation. The first connection achieves a low round-trip time until the second, third and fourth connections arrive.
F
Then the round-trip time starts to grow, and at this point here the first connection reverts to regular receive-window management, and it starts growing its receive window, and the corresponding number of unacknowledged bytes starts growing the same way as it grows for a regular connection. Between the two regular receivers, we see that the fourth connection at first loses some capacity and then it more or less equals the share that is achieved by the second connection.
F
We see in the top figure that at small download sizes it's the same, but when the size of the downloaded file starts growing, the average round-trip times for the downloads get higher and higher for the DRS, that is, the dynamic right-sizing receiver. For the Palermo receiver, that is, when the proxy had this algorithm enabled, the round-trip time was constant, almost constant. And a price had to be paid for this; it was not free.
F
The average throughput for these large connections went down, but they didn't starve. The price was about twenty-five or thirty percent for long connections, and it depended also on the traffic conditions, but never went below seventy percent, at least. But why would a receiver be willing to pay this price? In the next test we are going to see. Yes, this is another test that was made this way: on the same proxy we ran each download several times, twice each time.
F
The
alarm
over
see
the
first
time
the
DRS
receiver
was
enabled
and
the
second
time
palermo
reception
was
enabled,
and
in
parallel
there
was
heavy
traffic
and
Larson
notes
a
from
from
Colonel
mirrors,
and
this
this
download
said
we
made
for
the
match
mix
where
major
newspaper
pages
that
that
those
are
pitches
composed
of
many
small
files,
for
example.
We
can
see
for
for
this
one.
This
is
a
Washington
Post.
This
unload
was
only
2.1
megabytes
and
those
were
30
35
and
without
the
with
the
erase
receiver.
F
For this newspaper, the download time came down from 40 seconds to less than 20 seconds, and for the Miami Herald it came down to 24 seconds. This happened because of the many files that compose these pages; for example, a hundred files for one of them, 68 for The New Yorker, 65 for the Miami Herald. Those were a lot of connections whose performance was affected mostly by latency, and latency came down when the proxy was using the Palermo protocol. So the price is paid back for these downloads.
M
Hi, Randell Jesup, Mozilla. So I'm not too surprised that there's a cost to be paid for this, at least at the high end of transfer sizes. That would tend to imply that people who are expecting to do large transfers may avoid it. The gain at low transfers is very interesting and nice, but perhaps a little surprising as to the magnitude of the gain, and my question would be: have you looked at all at cases where we have a lot of smaller files to transfer and...
B
Because the point here is about congestion control, mainly. So I'm listed here on the first line because I'm presenting, but those who did the work are Safiqul Islam, Kristian Hiorth and Jianjie You. The motivation here is that we want to be able to combine the congestion controls of multiple flows going from one host to another host, so the same pair of hosts, across the internet. And, well, we can see that combining these congestion controls, rather than letting them fight it out on the network, can get some...
B
There have been other proposals, and there is discussion of this in a draft as well. Generally, it can be made easier; our goal has been to minimize the changes that you need to do to the TCP code. And one problem here is that we believe you cannot simply do that with multiple TCP connections, because of ECMP: you have equal-cost multipath or similar methods in a network that would spread connections across different paths, and then they may not share the same bottleneck, which may need some way of putting them together.
B
One of you, Tom Herbert, said that we could use the IPv6 flow label, or we could use Generic UDP Encapsulation. And I already want to ask a question: what do people think about this IPv6 flow label? If, on the sender side, I just use multiple connections to the same destination IP address and I give them the same flow label, are they going to be treated along the same path? I guess that's the semantics of the flow label, but is that something that networks will really do?
L
So the UDP guidelines now recommend that we use the flow label as well as the port semantics for load balancing in ECMP, so I expect some equipment to use the ports and some equipment to use the flow label currently. OK, and the intention was always to use the flow label; this didn't happen, then people started using ECMP, and now they need the flow label, so we are in a transition process. So I don't know whether you get consistent behavior or not; I suspect not.
B
I don't know what the trade-offs are there, but the point is you probably want to put them together across the same bottleneck, because the eventual goal really is to do the coupled congestion control. I want to present some results here on how we do it, and give a high-level view of the algorithm; there is an algorithm in the draft, so if you want to know the details, it really is there. I think it's a bit of a waste of time to go through every line in the diagram step by step.
B
First of all, this is based on a similar mechanism that we had in the RMCAT working group, which is in the draft draft-ietf-rmcat-coupled-cc, where we have essentially a table that keeps track of multiple flows, and you have things going in and things going out. The table contains priorities for every flow, and then it assigns a share of the congestion window that is based on the share of the flow's priority of the total sum of priorities.
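The priority-proportional sharing described for that table can be sketched like this. A minimal illustration with assumed names, not the draft's pseudocode: each flow's share of the aggregate window is its priority divided by the sum of all priorities, which is the rule described above.

```python
def cwnd_shares(total_cwnd, priorities):
    """Split an aggregate congestion window among coupled flows.

    total_cwnd: the aggregate congestion window for the flow group
    priorities: dict mapping flow id -> positive priority value
    Returns a dict mapping flow id -> that flow's window share.
    """
    total_priority = sum(priorities.values())
    # Each flow gets total_cwnd * (its priority / sum of priorities).
    return {flow: total_cwnd * p / total_priority
            for flow, p in priorities.items()}
```

For example, with priorities 2 and 1 over an aggregate window of 90, the flows receive 60 and 30 respectively.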
B
So this is about how the way to share is decided, and there is some logic that reacts whenever a flow updates its congestion window. Whenever that happens, we do something, and that something is based on the difference between the previously stored congestion window value and the new value that the flow just calculated.
B
There are some TCP-specific specialties that we need to handle here. The RMCAT mechanism is a little bit more generic and simpler, but with TCP, because of the states that TCP has, we have to do some specific things. One thing we did is to say that we only enter slow start if that happens to all flows, because essentially, as long as we back off along with the general TCP congestion control logic, we shouldn't be in slow start.
B
So if one flow keeps receiving ACKs but another flow has gone to slow start, we're not going to go to slow start with the overall behavior. And then there's also this notion of a loss event: TCP tries to avoid reacting multiple times during one loss event. We emulated that behavior in RMCAT using a timer, but in TCP that timer isn't necessary, because we really do have TCP doing this.
B
That is the fast recovery phase, and we used that phase instead of using a timer. Generally, the behavior that we have picked is slightly more conservative than the algorithm in draft-ietf-rmcat-coupled-cc. The result of using it: that is just simulation so far. We do have what we have called the encapsulation, which I'm going to show you, but the congestion control work so far is only in simulations.
B
Here are a few results: this case is four Reno flows across a 10 megabit per second bottleneck, with, was it not here, 100 milliseconds, and a queue length that's the bandwidth-delay product worth of packets. This is with background traffic using Tmix, and this is just to visualize the behavior of the four flows. You can see that they fight against each other and have a very strongly fluctuating behavior on the left, and things look essentially like one flow on the right, which also means it's likely less aggressive.
B
Well, nothing very surprising about that, right? So we emulate the behavior of one flow as opposed to four flows; that shouldn't use so much queuing, and it shouldn't produce so much loss. Prioritization also works very well. That's a graph where, well, we intended to draw a theory line as well, but it's always the same with these graphs: it's so precise that there is no point putting in a second line, you wouldn't see it, because we enforce, you know, a priority share on the sender side, so you get precisely the window.
B
And we're working on more results; that's just some early view on the congestion control. We're working on a paper, and we're going to share the results as soon as we can. For the encapsulation here, the idea was to put multiple TCP flows inside UDP. This was heavily inspired by another draft, by Stuart Cheshire, and an earlier draft that has a similar thing in it; it's almost the same. It suppresses the TCP checksum and urgent pointer field and sets zero for the urgent flag, and we do the same here.
B
It suppresses the TCP source and destination ports to be able to fit the whole TCP packet inside the UDP packet without increasing the size, while reducing the MSS; we do that as well. But one difference that we have here is that we actually preserve the semantics of the port numbers. So the idea is that a user, or an application running over this, would use ports, and everything will be just the same as before; only on the wire will these ports be encoded in a different way.
B
So we need to carry the ports in some way, and what we did is we made a flow ID; actually, the draft calls it the connection ID. It can multiplex up to 32 connections, because we just stole the reserved bits, and basically we can do whatever we wish with this header anyway, because it's inside UDP. So we took these bits, assuming no one is going to want to use them inside TCP, because if you want to play with TCP as an extension inside UDP, you would probably want to add an option, and we...
B
...can do whatever we wish with options, because we can play so nicely with options. Also, on SYN packets we use a SYN option, and it's an option to map ports to this flow ID. So it's an initial message telling the other side what the port mapping is going to be for this flow, and then both sides know it; it just stays the same on the sender and receiver side. And we can easily put options on SYNs, because it's all in UDP; we don't have problems with middleboxes.
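As a rough illustration of the encapsulation idea, the packing below shows a simplified header; this is an assumption-laden sketch, not the exact wire format of the draft. Per the description above, the checksum and urgent pointer are dropped (UDP's checksum covers the packet), and a small connection ID, negotiated once in the SYN option, stands in for the port pair.

```python
import struct

def encapsulate(conn_id, seq, ack, flags, window, payload):
    """Pack a TCP-like header for carriage inside a UDP payload (sketch).

    Illustrative layout only: the connection ID (up to 32 values, i.e. the
    freed reserved bits) replaces the port fields, and the checksum and
    urgent pointer fields are simply absent, since UDP already provides a
    checksum over the whole datagram.
    """
    assert 0 <= conn_id < 32
    data_offset = 5  # header length in 32-bit words (no TCP options here)
    header = struct.pack('!HHIIBBH',
                         conn_id, 0,          # conn ID where ports would be
                         seq, ack,            # sequence / acknowledgment
                         data_offset << 4,    # offset, reserved bits zeroed
                         flags,               # e.g. 0x02 for SYN
                         window)              # receive window
    return header + payload
```

Parsing on the receive side would read the connection ID and look up the real port pair learned from the SYN option, restoring normal socket semantics to the application.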
B
...this protocol, because you're the first people to know it exists. There is also some more of our own stuff in here: the data offset has shifted to a different place, and that can help in identifying other protocols, like distinguishing between this and STUN. That part is from Stuart's draft, and we just copied it over.
B
So the setup for doing this multiple-flow thing here is that we do Happy Eyeballs for TCP versus TCP-in-UDP, and we put the mapping, like I said, in the SYN and SYN/ACK. And because we write this code, and the whole idea of this thing is that it's a simple code change in the kernel, we can also put things in the right sequence. So we essentially have the normal TCP machinery doing its TCP thing, and just before it sends out the packet, this thing kicks in.
B
If we can do it, we do it first with the UDP packet. In this way we can guarantee that if we don't get the UDP response first, it's either reordering in the network or the UDP packet was lost, which should greatly reduce the chance of this becoming a problem: that you may get a TCP response first and then not know that this TCP-in-UDP actually works. Because, at least in the processing in the kernel, we can ensure the correct sequence.
B
This encapsulation will just work whenever UDP works. It has some benefits related to STUN; again, I'm pointing at Stuart's stuff. And it's possible to use other transport protocols with the same approach. I would like to say that the draft about SPUD requirements said something about issues with in-band SPUD usage, and that turned out to be not a big problem, because when we use TCP, then the size of the MSS can change anyway.
B
So, as you construct the TCP header, depending on the number of options that are there, you will have to calculate the correct size. So as long as the processing of SPUD is known to the TCP code, you just calculate the right size for the payload; there's actually nothing more you need to do, really. You can consider a SPUD header, or whatever header comes before it, as if you had a TCP option that comes before TCP.
B
And it does prevent ECMP, but ECMP can be a good thing; that's another issue, right? It's not necessarily a good idea to try to prevent it. For now we have it as a socket option. Maybe it's only interesting to use this when you expect that there will be short flows; that could be a bigger benefit, because a short flow can quickly use the larger window that a longer flow has already obtained. So it could be a use-dependent decision, I don't know; it's a trade-off, using that.
B
So, to conclude: we have finished the implementation of the encapsulation for the FreeBSD kernel. This whole thing is in the works, but this code is actually already done and working. The coupled-cc code is under development in simulation code, and rudimentary code is already being developed for FreeBSD. It's not going to be very hard to incorporate that; we're trying to progress pretty fast with that. That's all; thank you also to Huawei for this. And questions?
K
Yeah, it's not a trade-off. You said that you process the TCP-in-UDP packet first; I mean, on the server side, you process the TCP-in-UDP packet first. How do you ensure that you get it first? I mean, even on the client side, you make sure you sent the TCP-in-UDP packet first; it might be received there, but NICs distribute the packets over multiple cores. How do you know?
B
But
I
mean
that's.
Why
we're
here
we're
here
to
solve
problems
yeah,
so
hi,
Brian,
Trammell,
cool
stuff,
already
said
this
on
this
tacky
bovis,
I'm
going
to
say
to
your
kid
here
again
to
I,
went
to
basically
just
tune
an
advertisement
for
the
next
session,
so
you
were
looking
at
the
blocking
and
UDP
and
80
and
443
and
not
80
and
not
for
43.
We
have
a
talk
about
that
in
map
RG,
which
is
in
the
next
session.
Can
we
run
the
internet
over
UDP?
The
spoiler
is
yes,
probably
if
we're
careful,
yeah.
D
Thanks, cool, generally a good talk. Two questions. One: all the benefits of doing sharing, as in the Congestion Manager, for example. I actually don't remember if this was true in the Congestion Manager or not, but getting shared loss detection and recovery is quite nice; we have a lot of short flows, and that actually contributes to saving a lot of latency.
D
So in this case, you don't seem to have that in here; you seem to have shared congestion control, but, and correct me if I'm wrong, you don't have shared loss detection and recovery. Like, if you have multiple flows, you can't use data from one connection to kick off a fast retransmit on another connection.
D
The
be
that
is
space
here.
To
also
do
that,
you
aren't
doing
it
right
now,
but
it's
possible
to
do
that.
Yeah
from
a
sender
view
when
you
have,
when
you're
sort
of
sending
all
of
your
traffic
through
one
siphon,
you
can
detect
VK.
You
know
which
package
sent
before
which
packets
in
which
packets
are
sent,
after
which
packets
and
if
you
have
the
ability
to
track
that
time,
sequencing
of
packets
across
multiple
flows.
You
can
figure
out
how
to
trigger
pass
retransmissions
earlier
than
you
would
have
otherwise
I
see:
yeah,
okay,
okay,.
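The cross-flow loss detection D sketches could look something like this (a hypothetical illustration of the idea only; the class, names, and threshold below are my own, not anything shown in the session). The sender numbers every packet in a global transmit order across flows, and a packet still outstanding after enough later-sent packets have been acknowledged becomes a candidate for an early fast retransmit on its own flow:

```python
# Hypothetical sketch: shared loss detection across flows that go
# through one five-tuple. Every packet gets a global send-order number;
# once DUP_THRESH later-sent packets (on any flow) have been acked while
# an earlier packet is still outstanding, that earlier packet is a
# candidate for an early fast retransmit.

DUP_THRESH = 3  # acked successors needed before declaring loss

class SharedLossDetector:
    def __init__(self):
        self.next_seq = 0      # global transmit counter across all flows
        self.outstanding = {}  # global seq -> (flow_id, packet_id)

    def on_send(self, flow_id, packet_id):
        seq = self.next_seq
        self.next_seq += 1
        self.outstanding[seq] = (flow_id, packet_id)
        return seq

    def on_ack(self, global_seq):
        self.outstanding.pop(global_seq, None)
        lost = []
        for seq, (flow_id, packet_id) in sorted(self.outstanding.items()):
            # packets sent after `seq` that are no longer outstanding
            later_acked = sum(1 for s in range(seq + 1, self.next_seq)
                              if s not in self.outstanding)
            if later_acked >= DUP_THRESH:
                lost.append((flow_id, packet_id))
        return lost  # candidates for early fast retransmit
```

Interleaving two flows and acking everything except the first packet makes that packet a loss candidate after three later acks, regardless of which flow the later packets belonged to, which is exactly the cross-flow trigger being discussed.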
D
Then, back to the HTTP-over-UDP context, where timeouts dominate latency and things of that sort: things like the tail loss probe helped a lot, because it basically reduces the number of tails you have for loss recovery, and this went a long way in reducing latency. So that's a bit of feedback on something else that could be added in there. The second question is about the shared, the coupled congestion control: the coupling, I don't think, requires the framing of UDP, right?
B
I think, I think you want to separate it out. Okay, because of all the feedback we got, we've also got different ways of encapsulating: you could be using the flow label, you could be doing generic UDP encapsulation, you could not do anything and not care about it, or maybe just assume that, you know, the bottleneck is on the receiver side, because the receiver tells you. You can have all kinds of ways of doing it, so I think you're right to split it out like that.
E
This should be fast. I'm Matt Sargent. I was wondering if this has any parallels with the Multipath TCP effort, and sort of the way they couple congestion control across multiple flows.
B
It's a little different: it has certain similarities, and in certain ways it's different. Yeah, it's not the same. With MPTCP you have the goal of trying to be like one TCP, basically, you know, in terms of how the congestion control operates; but here it's not like that. MPTCP has this whole resource pooling, where, if you go over different paths, you want to be able to shift traffic towards the path that has the least congestion. That is not how this congestion control operates. So it's coupling, but it's a slightly different coupling. Sure.
E
I thought there's also the aspect that some folks are using this essentially on single-homed machines, and they have the option to open multiple subflows across one IP address on machine A and one IP address on machine B; then you end up in a somewhat similar scenario. I think it's not always necessarily two interfaces or two unique paths.
O
That is based on how much and how fast the queue size changes, and that's multiplied by a factor beta. So if the queue-size jump is big, then you also need a big compensation, an extra probability; that's the proportional part. And if that hasn't covered all the variations, then of course there is still an offset from your target, because in PIE you want to have your queue delay at the target.
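The proportional and integral parts just described can be sketched as a PIE-style PI update (the gain and target constants below are illustrative placeholders, not values from the talk):

```python
# Sketch of a PIE-style PI controller update. The integral term reacts
# to the offset of the queue delay from its target; the proportional
# term reacts to how fast the queue delay is changing, scaled by beta.

ALPHA = 0.125    # integral gain, illustrative
BETA = 1.25      # proportional gain, illustrative
TARGET = 0.015   # queue-delay target in seconds, illustrative

def pie_update(p, qdelay, qdelay_old):
    p += ALPHA * (qdelay - TARGET)       # steady offset from the target
    p += BETA * (qdelay - qdelay_old)    # big jump -> big compensation
    return min(max(p, 0.0), 1.0)         # keep it a valid probability
```

A queue delay that jumps from 10 ms to 30 ms raises the probability through both terms at once: the proportional term for the jump, the integral term for the remaining offset from the target.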
O
Okay, next slide. An important step is choosing those alpha and beta gain factors. Higher alpha and beta factors give a faster response, but of course, to be stable, your phase and gain margins must be above zero, because otherwise it becomes unstable. Here you see a few curves with different alpha and beta factors, and you can see this gain margin. So you could put these bars very high and support the lowest probability, like this, but then you have very small alpha and beta factors, which will be very sluggish when your probability, your load, is very high. So, next slide. The solution that was used in PIE, and in PIE enhanced, is to scale the alpha and beta factors and to jump, based on the probability, from one curve to the other; as you can see, in the current Linux implementation there are three of these steps to go up and to support lower probabilities.
O
So these are, you could say, extra internal alpha and beta parameters which need to be configured on top of your normal, basic alpha and beta factors. Maybe next slide. So in the PI square solution that we propose, we just remove the square root by actually not controlling p but a p prime, which is a kind of virtual probability that is later squared; with the squaring, you don't need the scaling here.
O
You just use two random numbers, which actually have enough resolution, because your p prime changes in a much smaller space; you take the maximum of both and compare that with your probability. So actually what happens is that you're controlling the square root of the probability, because it's squared afterwards; so your sender's rate becomes proportional to one over p prime.
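The two-random-numbers trick just described can be sketched like this (an illustration assuming a plain uniform generator): requiring the maximum of two uniforms to fall below p prime yields an event of probability p prime squared, so the controller only ever works with p prime and never computes a square or square root explicitly.

```python
import random

# Mark/drop a "classic" packet with probability p_prime ** 2 without
# computing the square: draw two uniforms and act only when both are
# below p_prime, i.e. when max(r1, r2) < p_prime.

def classic_mark(p_prime, rng=random.random):
    r1, r2 = rng(), rng()
    return max(r1, r2) < p_prime
```

Because each uniform is compared against p prime itself, both draws keep the full resolution of the small p prime range, which is the point the speaker makes about resolution.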
O
So we did the same for TCP Reno on PI square, and the difference is that we just put the square here, around p prime, because we're controlling p prime here. And also for scalable TCP: scalable TCP, instead of reducing by half a window (because this part is the window divided by two), reduces in both cases by only half a packet. So per mark, instead of halving the window, we only reduce by half a packet, because these terms are actually the queue size at that moment.
O
Okay. So as a result, if you look at the next slide: if we compare with a normal PIE, you see that the gain margin is in steps; but if you look at how the gain evolves for the p prime, it's a flat line, so you don't need to compensate at all. So you can set your alpha and beta gain factors only once, and they are universally applicable for the full range, let's say, of your control, so it simplifies a lot. Also, we saw that you can have a higher gain factor.
O
This is a Linux implementation. We also have a dual-queue option for that, but in this case, in these experiments, we used a single queue, where we also use the rate estimation and the queue delay of PIE enhanced. There were a lot of other heuristics added in the implementation, which we just removed.
O
If it's ECT(0), it can be marked in the classic way, so with the squared probability; and if it's ECT(1), it's supposed to be a scalable TCP, which doesn't need the squaring, because it already responds in a scalable way to the marking, which is also much more frequent. An example of that is Data Center TCP. In our experiments we used normal Cubic here.
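The codepoint rule just described could be sketched as follows (an assumption-level illustration, not the actual qdisc code; the ECT constants are the standard two-bit ECN field values):

```python
# ECT(1) is taken to announce a scalable sender (e.g. DCTCP-like) that
# responds linearly to marks, so it gets p_prime directly; everything
# else is treated as classic and gets the squared probability.

ECT0, ECT1 = 0b10, 0b01  # two-bit ECN field codepoints

def mark_probability(ecn_codepoint, p_prime):
    if ecn_codepoint == ECT1:
        return p_prime          # scalable: linear, more frequent marks
    return p_prime ** 2         # classic: Reno/Cubic-style response
```

With the same p prime, scalable traffic is marked far more often than classic traffic is dropped or marked, matching the remark that a scalable sender sees "much more" signal.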
O
Then it has this equation, and if you combine them, if you want the rates to be the same, you can approximate it like this: the probability we apply to Data Center TCP, we divide by two and we square it, and then we apply that to the classic TCP, and that works pretty well. We did some experiments, with round-trip times from 5 to 200 milliseconds and links from 4 to 400 megabits. Next slide.
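The divide-by-two-and-square coupling can be checked numerically under the textbook rate models (a sketch only, with illustrative constants: a Reno-like rate proportional to 1/sqrt(p), a DCTCP-like rate proportional to 1/p, and the factor-two constant approximation the speaker mentions):

```python
import math

# Coupling as described: the probability applied to the DCTCP-like
# traffic, divided by two and squared, becomes the probability for the
# classic traffic, which equalises the two steady-state rates.

def coupled_classic_prob(p_l):
    return (p_l / 2.0) ** 2

def reno_rate(p_c, rtt=0.1, k=math.sqrt(1.5)):
    # classic steady-state model: rate ~ k / (RTT * sqrt(p))
    return k / (rtt * math.sqrt(p_c))

def dctcp_rate(p_l, rtt=0.1, k=2 * math.sqrt(1.5)):
    # scalable steady-state model: rate ~ k' / (RTT * p), with k' ~ 2k
    return k / (rtt * p_l)
```

With these models, reno_rate(coupled_classic_prob(p)) equals dctcp_rate(p) for any p, which is the rate balance being described.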
O
So this is the figure where we ran the tests: the red lines are the PIE-controlled, PIE enhanced, and the blue lines are PI square. For Cubic you see that PI square works similarly, maybe with a little bit less overshoot here, because it can have bigger parameters. But for Data Center TCP you see, first of all, that PIE is not adapted to it, so it's unstable in between; with PI square we don't apply the square here, we don't do the tuning, and we use the correct parameters for them.
O
You have to note that this is actually the delay, the queue delay, measured as an average over one second; so these peaks are in reality a bit bigger but much shorter. That's why, over a one-second average, they are much smaller. Okay, for the throughput ratio, here we also compared PI and PI square.
O
There are variations because we only test for 250 seconds, with two flows, one of each; this is with PI square. So of course we assume that, under normal conditions, ECN and non-ECN get the same throughput. If we use Data Center TCP here, and we use PI square with the scalable, so the non-squared, probability, you see it's also pretty well compatible with the other ones, except here, when Cubic mode starts to kick in, because the round-trip times are very big.
O
These are the queuing delays. These are older results, with alpha and beta the standard PIE parameters, so that the gains can be bigger; that's why the variations here are a bit more sluggish and bigger. The queue-delay target here is 20 milliseconds. We again compared PI and PI square: the red and the purple ones are ECN, which is ECN with Cubic, and here the red and the blue, this is Cubic with Data Center TCP running together.
O
So you see the queue sizes are on average around the target; in PIE it's more the peaks which are limited. It's probably because the parameters were configured a little bit differently. Also, we still need to do all these tests with much higher gain parameters to see, but it's a lot of work, because it's a lot of experiments. Next slide.
O
So, just to tell you that the dual queue is actually our deployment goal, because we rather do the smoothing in the end system, and a dual queue with immediate network marking outperforms any smooth AQM such as PIE, and PI square also, since they smooth in the network. But PI square is useful to control the classic queue size: it's simpler and it's automatically tuning, so that's a good thing, and its signal can also be used for scalable TCP, so that's also a big advantage.
B
First of all, what happened to the blue sheets? Where are the blue sheets? Has anybody got them? Have they not been signed? Okay, they're here, I see; if you haven't signed them, sign them, and ask Koen questions if you wish, but I think I'll formally end it. Those talking at the mic, if you want, just go on discussing, but I think, yeah, in this last session, sorry, we have a plan to have a discussion about... you wanted to say?