From YouTube: IETF115-ICCRG-20221108-1300
Description
ICCRG meeting session at IETF115
2022/11/08 1300
https://datatracker.ietf.org/meeting/115/proceedings/
E
Welcome, this is ICCRG. If you don't know what that is, you're in the wrong room.
C
It's like this opportunity to come stand at the front. So hi, I'm Colin Perkins, I'm the IRTF chair. I just wanted to say, as some of you may have seen, we've had a change of the chairs for this research group in the last couple of months. So I wanted to firstly thank Jana, who has been chairing this group for, I don't know how many years. Very many years. Some years, some years, yes. And he has been doing an excellent job chairing it. We figured after a decade or so we should perhaps give him some help, so I'm very pleased to welcome Simone Ferlin and Michael Schapira, both of whom are remote this time. Unfortunately they're not here in person, but they're joining as chairs. So I just wanted to thank Jana for his service, and to thank Simone and Michael for being willing to take this up, and I look forward to the reinvigorated group. As some of you will have seen, there's a charter revision going on, trying to come up with the ideas for the next few years of the group, and I look forward to the research to come.
E
Thank you, Colin. I want to say that it's wonderful to have folks joining in and picking up the chairing. I want to ask Simone if she wanted to say a couple of words.
F
I just want to say hi to those in the room that I know from the past. And yes, please read and comment on the charter that I circulated on the list yesterday, and I think we are ready to get started.
E
I just want to say that it's wonderful to have more chairs, more people to help support what we are trying to do. We're going to start with doing a re-charter of the research group, especially in light of the new work that's happening in the IETF around the congestion control working group and so on. We thought it was a good opportunity and time to think about exactly what it is that we want to do in the research group, so we're focused on that.
E
You'll definitely hear more about that. Please stay engaged on the list and elsewhere. As you see opportunities to chime in, we would love your input, and we look forward to it.
E
So we want all of those folks to be engaged. Look forward to your contributions going forward, and please chime in on the charter when you see it. But Simone, what do we have today?
H
Cool. Hello, everybody, and sorry for the slightly silly title, but the point is that there is a bit more of a point to it than it may seem. Okay, next slide. You know, first of all, the reason I do this, the reason I present this, is that in the world I am from, which is the world of academia.
H
The other side of it is that when I talk to my 13-year-old daughter about how I'm improving the speed of the internet, she says "why?", and I have no good answer, because it just works well enough for her. It works well enough for many things. So I'm trying to see more of a point in what I'm doing, and I had this idea: probably, if you do congestion control right, there is some gain to be made in terms of energy saving. And it turns out that this is true.
H
We save CO2 emissions and so forth by having meetings online, but the internet does also contribute, and it's a significant number. The number is very hard to get, because what is the internet? There are many, many studies. There are some pretty serious studies, but they're also old; there's a HotNets paper from, I think, 2011 or '12. But if you look through these studies and you try to get an idea, it turns out that the contribution to greenhouse gases is maybe in the range of 0.5 percent to 1.2 percent.
H
The aviation industry appears to be in the range of 3.5 percent. So it's something like a third to a seventh, a sixth, a fifth, something like that, of the aviation industry, depending on which study you look at. So it's still a significant amount. And what is interesting about it, I think, is that if you would think "let's go out and change all of the aviation industry", that's something that is basically impossible to do, whereas here we're doing standards for the whole internet. When we make a big change to something, it gets rolled out into all the OSes; we can actually change the world, right, in terms of the internet. So there is big mileage here, I think, and this is why I think this is interesting. Another side note to consider, something I took from looking at these studies of energy saving and energy consumption for the internet, and also the way that energy efficiency is evolving:
H
Next, please. Okay: today, energy saving is commonly regarded as a performance trade-off, right? Smartphones expose it to us as something that we can turn on, and usually people turn it on as a way to save energy when the battery runs low, but not just for the sake of the planet, because it's regarded as a trade-off, right?
H
If you look, I mean, I found this explanation on a Microsoft page about some laptop going into power-saving mode depending on whether it's plugged in or not, right? So it is often regarded as a trade-off. My point here is that it doesn't need to be; not everything here is a trade-off. Next.
H
Yeah, so that's my main point, and the key thing here really is that if we reduce the flow completion time, which is something we all want in this room, I hope, then it means better performance and it means less energy. And that's very true, very simply, because we get more sleep time, longer sleep periods, and that can be done with better internet congestion controls. I'm showing you a simple straw-man example for this in the next slides. Please, next.
H
So I wrote this short paper which contains the details of the tests that I'm going to explain briefly here; if you want to know the details, it's in there. I looked at Wi-Fi. I just looked at: well, how much energy will be saved by cutting the flow completion time? Again, the point is to make it a very, very simple example. I'm not proposing to increase the initial window; it's based on just the initial window for the sake of a simple case, right, of saying: here's a straw man.
H
So, first of all, in Wi-Fi there are quite a number of schemes that put devices to sleep. It gets really complex with 802.11ax, which offers quite a number of things, but it turns out that the most commonly implemented scheme is a pretty old power-saving mode from the old 802.11 standard, which basically puts your device to sleep after 200 milliseconds of not seeing any activity. Now, there are phases of trying to see if there's no data, and there are also different states in the whole thing.
H
So it's a bit more complex, but I think it's a reasonable approximation to think of it like that: it's basically putting everything to sleep after 200 milliseconds. There's a relatively recent paper from INFOCOM 2021 where the authors looked at some modern smartphones, and they found the same behavior, despite these smartphones actually being compatible with 802.11ad, which also supports much more sophisticated ways of doing it. So it seems that the most commonly implemented thing today still is basically sleeping after 200 milliseconds of inactivity.
H
Now, that plays a role when transfers are short. When I have a very long transfer, then 200 milliseconds in relation to the transfer becomes small, right? And when transfers are short, well, actually, the majority of internet traffic is like that: packet loss is relatively rare, transfers often terminate in slow start, or we have app-limited periods, so we have bursts and nothing, very often. And under these circumstances the flow completion time becomes a function of the round-trip time.
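As a rough illustration of that last point (a simplified model, not the paper's exact analysis): when a transfer ends inside slow start, the flow completion time is roughly the number of slow-start rounds times the RTT, with the window doubling each round from the initial window.

```python
def slowstart_rounds(n_packets, iw):
    """Rounds needed to deliver n_packets, window doubling from iw."""
    sent, cwnd, rounds = 0, iw, 0
    while sent < n_packets:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds

def fct(n_packets, iw, rtt):
    # One RTT per round; ignores handshake, loss, and pacing.
    return slowstart_rounds(n_packets, iw) * rtt

# For an 80-packet transfer: IW=2 needs 6 rounds, IW=10 needs 4,
# so at a 100 ms RTT the difference is a full 200 ms of radio time.
```

This is why a short transfer's completion time, and hence its energy cost under the 200 ms sleep model, is dominated by the RTT rather than the link rate.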
H
So here's what I did, the point being that we have this way of sending data which is not saturating the window, on the left, and a way which saturates the window, on the right, which is shorter. So that is the point: it's shorter, it's faster, and it's also better in terms of energy.
H
What I did is I ran a local test in the test bed: a very simple short data transfer of 10 packets and of 80 packets, 10 on the left, 80 on the right. It's a wired test bed, so I just ran this with different initial window values.
H
I got the pcap traces, and then there's a tool called EnergyBox, which people from, I think, Linköping University have developed, and that has been used also by Spotify to evaluate the energy efficiency of their mobile application. It is using this old 200-millisecond model, but as I just explained, that probably still applies. What you can do with that tool is put the pcap file into it.
H
It tells you how much energy was spent, and what you find here is that the savings get quite significant, right? For instance, with large RTTs (obviously, when we're talking about such short transfers, the RTT plays a role), you can save up to something like a third of the energy. And this is just from going from an initial window of two to ten.
H
Okay, that's already it. My point is not to say we should increase the initial window, but that there is a gain to be made in terms of energy efficiency when we cut the flow completion time, and it can be significant. There can be plenty of ways of doing this, right? For instance, I came across a paper that uses reinforcement learning to update the initial window value over time. You could be doing coupled congestion control, learning from past history, using ongoing connections, or using proxies.
H
I'm a big fan of that. There are various ways of doing things better. But, you know, reduce the number of ACKs that we're sending when we don't need to, for instance: ACK congestion control, I think, really didn't happen because it didn't matter so much, but now maybe it matters, because ACKs actually consume, or waste, energy. Altogether, it's an important topic, and it's growing in importance. I think it's also interesting because, at least in my world, I see a decline in the importance of just performance improvement (you know, my academic trying-to-get-money world), whereas energy saving is growing in importance, and I think this is an opportunity to combine the two things, to actually make gains on both sides. But it's worth measuring as well, right? Looking at that as a metric, basically. And I think that's everything.
I
From Google. Thanks, this is interesting for sure, and I wonder how you would have this changed. So one way of thinking of congestion control, of course, is as kind of a trade-off (maybe this is an inaccurate way of saying it) between sort of speed and unnecessary retransmission, in some respects.
H
I'm talking about flow completion time because I think most congestion controls today pretty much try to optimize to avoid unnecessary transmissions anyway. You could try, on top of that, to optimize towards that even more, but there's probably more gain if you start doing, let's say, a congestion control where you don't have slow start just completely blowing through the buffers in a way that would result in that.
B
Is it working? Yeah? Okay. So just one question, to play devil's advocate. You are mentioning that processing transactions faster saves energy because you sit on the radio for a shorter time. That's nice, but don't we have a feedback loop there, in that processing transactions faster gets you to perform more transactions?
B
Yes, because we have seen the same thing with cars: shipping cars faster, if they stop less often, etc. So you make bigger highways, and if you make bigger highways, you say, hey, if I had the same traffic I had five years ago, it would be processed much faster and I would save energy.
D
Yes. Jordi Ros from Qualcomm. Yeah, and I wanted to ask how this conversation connects with efforts like coinergine, the idea of adding some computation at the core, at the switch, to help congestion control by providing explicit feedback from a network element. The idea being that, if you look at what TCP congestion control is trying to do with the end-to-end design, it's basically trying to do sort of a gradient descent, using this whole network, and the arbitration takes one round-trip time to get the feedback, whereas you could actually do some calculations at the core of the network, and instead of using the RTT you can use these in-network calculations and then provide much faster feedback.
D
So, any thoughts on, you know, if we're really serious about reducing energy consumption, maybe opening up the box and adding some intelligence at the core to actually reduce the convergence time of TCP, which is, as we know...
E
I put myself in line and then I pulled myself out, because I will ask you separately, because we do need to move on to the next one. Thank you, Michael. Yeah, it was definitely a very different sort of talk, and it's going to be interesting to think about what we can do next year. Thanks.
E
All right, now moving on to the next one. Venkat Arun is here, and he is on video. So, hey, Venkat!
E
Thank you for joining. I'm going to pull your slides up, and I'm going to be running your slides from here, so just call out whenever you want to advance. Let me briefly introduce Venkat: he is a graduate student at MIT working on congestion control. He's been doing a number of things, and he's done other controllers in the past as well.
E
Copa is probably the most well-known of those, but I want to give it to you, Venkat, to introduce yourself and get it going. And to everybody else: please be kind, be gentle, but not too gentle. Thanks, Venkat, take it away.
K
Hi Jana, thanks for the introduction. I hope you can hear me. I'm sorry for the confusion; the daylight savings time change threw me off. So I've been working on congestion control, and now I've started trying to figure out how we can formalize the area of congestion control and other heuristics used in networks, in ways that are practically meaningful.
K
In the modern internet, users want interactive, low-latency applications, but our loss-based congestion control algorithms don't bound delay, and therefore people have developed all of these delay-bounding congestion control algorithms, including the famous BBR, and they use a wide variety of methods. Some use queuing delay, others use receive rate, and yet others are learning-based. And despite the vast variety of techniques they use, they all share this one common property (next slide): they're all delay-convergent. And so, next slide. So what do we mean by this one common property?
K
So suppose we run the algorithm on an ideal link: we pick some constant link rate, some constant delay, and we plot the delay experienced by the packets, and it could look something like the cartoon diagram here. Now, we call an algorithm delay-convergent if, after a certain convergence time, the delay variation is small. And what we found, to our surprise, is that even though this looks perfectly innocuous, and a very natural thing to do, it is bad. It can lead to starvation, where one flow dominates.
K
When multiple flows are sharing the same bottleneck queue, one flow takes all of the bandwidth at the cost of all the others. Next slide, please. So this happens because an end-to-end CCA can really only measure the end-to-end delay, and we can write the total end-to-end RTT as the sum of three components. First is the constant propagation delay.
K
Then there is delay due to queuing at the bottleneck; that's the congestive delay. And then there is every other source of delay variation; we call that non-congestive delay. And the problem is that, from end-to-end measurements alone, it's very hard to distinguish between these two types of delay. That is the root of the problem, and that's why delay-convergent algorithms are bad. Next slide.
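A tiny illustration of the decomposition just described (the numbers are made up): the end host observes only the sum of the three components, so very different network states can produce identical measurements.

```python
PROP = 0.040  # propagation delay in seconds, assumed constant

def measured_rtt(congestive, non_congestive):
    """What the end host sees: only the sum of the components."""
    return PROP + congestive + non_congestive

# 20 ms of real queuing with no jitter, versus 5 ms of queuing plus
# 15 ms of non-congestive delay (ACK aggregation, scheduling, ...):
a = measured_rtt(0.020, 0.000)
b = measured_rtt(0.005, 0.015)
assert abs(a - b) < 1e-12  # indistinguishable from the endpoint's view
```

Any estimator that maps the measured RTT back to a congestive component has to guess how to split the sum, and that guess is exactly where the ambiguity lies.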
K
End hosts, in order to save on CPU costs, send packets in bursts, and operating system schedulers have their own quirks, because they're not only optimizing for packet scheduling; they're also optimizing for every other process running in the system. And the problem is, one path can have multiple of these. So even if we manage to model one of these phenomena and account for it, modeling their combination is much, much harder. Next slide.
K
So what does this non-congestive delay look like? This is an example of the delays experienced by packets between a cellular node in Stanford and AWS California; it is from the Pantheon project. And if we zoom in (next slide) to a small section of this picture, we find that the delay is kind of complex. Note that this is just 14 milliseconds on the x-axis, which is like 10 times smaller than the RTT.
K
Next slide. So there are animations, yeah. If you pan to the left, we see that there is another type of non-congestive delay variation, and this is not ACK aggregation; honestly, I don't even know what this is. And if we as humans can't figure out what the sources of delay variation are, it's really hard for an automated thing to do so. And it's not just cellular networks.
K
Next slide: if you look at an Ethernet link (this is the cleanest link we could find in the Pantheon data set), here some of the delay variation is because of congestion; the sudden drops in delay are because of BBR's RTT probe. But if we zoom into one of the constant parts (next slide), we see that the delay variation here is smaller, but it's just as complex and significant.
K
So in cellular it was around 10 milliseconds; in this case it's about two milliseconds, but we have very complex delay variation, and it's hard to distinguish how much of this delay is because of congestion and how much of it is not.
K
Now, the question is: does this matter? So let's see how this could matter. Suppose there are two flows sharing a common bottleneck, and say one of them is behind Wi-Fi and the other is using Ethernet. Now, because the non-congestive part of the path of these two flows is different, they're going to experience different non-congestive delay; and because all they can really measure is congestive plus non-congestive delay, they're going to estimate the congestive component of the delay differently.
K
So one flow thinks it's 20 milliseconds and the other thinks it's five milliseconds, while the ground truth, the actual queuing delay, is five milliseconds. This is going to mean that there's going to be some unfairness, but that's not the surprising part; we've accepted some amount of unfairness for a long time. The surprising part is that the unfairness can be arbitrarily large. So the next question is: can we just correctly estimate the congestive component of the delay?
K
But the problem is, people have tried a lot of these estimators, and every one of the estimators we tried has failure modes, and realistic failure modes. The problem is that the internet is just too complicated, and the non-congestive delay component is complex. Next slide. Okay, so I'm going to define what I mean by starvation, and then show you that every delay-based algorithm we've designed thus far suffers from this problem.
K
So the first criterion is that, when two flows share a bottleneck, the ratio of throughputs that they obtain is arbitrarily large. And we are not just interested in transient phenomena. For instance, if flow A has been running for a long time and flow B just starts off, then of course flow B is going to get lower throughput for a little while; we are only interested in cases where it remains that way forever, because that's much worse. Next slide.
K
Okay, so before I make the general claim, let me show how this can happen in a family of schemes: Vegas, FAST, and Copa. Even though these three algorithms have very different dynamics, they have the same equilibrium behavior, where the sending rate is inversely proportional to the queuing delay. And if we look at some of the numbers on the x-axis: at 20 milliseconds of queuing delay, a flow will send at one megabit per second.
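The equilibrium described above can be sketched in a few lines (the constant is hypothetical, chosen only to match the talk's numbers): the sending rate is inversely proportional to the estimated queuing delay, so a small error in separating congestive from non-congestive delay swings the rate by an order of magnitude.

```python
# Vegas/FAST/Copa-style equilibrium: rate = ALPHA / queuing_delay.
# ALPHA is a made-up constant picked so that 20 ms -> 1 Mbit/s.
ALPHA = 20_000.0  # bits

def equilibrium_rate(est_queuing_delay_s):
    """Equilibrium sending rate (bit/s) for an estimated queuing delay."""
    return ALPHA / est_queuing_delay_s

# Same bottleneck, but the two flows estimate the congestive delay
# differently because of non-congestive jitter on their paths:
r1 = equilibrium_rate(0.020)  # thinks queuing delay is 20 ms
r2 = equilibrium_rate(0.002)  # thinks it is 2 ms: 10x the rate
```

A couple of milliseconds of non-congestive delay, which the earlier traces showed is entirely ordinary, is thus enough to put two flows at 10x different equilibrium rates.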
K
If it sees roughly two milliseconds, then it will send at 10 megabits per second, and at 0.2 milliseconds it will send at 100 megabits per second. And here you might see the problem: with just the difference between 0.2 and 2 milliseconds of estimated queuing delay, it sends at a rate that's 10x different, between 10 and 100 megabits per second. And as we saw in the graphs before, it's very easy for there to be non-congestive delay that's, you know, two milliseconds large.
K
And therefore, just because two flows are sharing the same bottleneck does not mean that they are estimating the same congestive delay; they could be sending at arbitrarily different rates. Next slide. But is this just a problem with the Vegas family of algorithms? Could we do better? And this is where the original notion of delay convergence I defined comes in: all of these algorithms share the same problem.
K
So what the theorem proves is this. Suppose an algorithm is delay-convergent with delay variation delta, so after the convergence time the delay variation remains within delta. Then we can always construct non-congestive delay variations smaller than a bound D such that starvation occurs, where D is any number greater than twice delta. So the smaller the delta, the easier it is to make it starve.
K
So the more delay-convergent it is, the more susceptible it is to starvation, and for many CCAs delta can be arbitrarily small or even zero. So you only need an extremely tiny amount of non-congestive delay for this problem to occur. Next slide.
K
So I'm not going to prove the entire theorem; I'm just going to give the intuition. I run the congestion control algorithm on ideal links with different link rates, and for each link rate I plot a little vertical line which shows the range of delays to which the delay-convergent algorithm converges. And we know from the definition of delay convergence that it's going to be smaller than some delta.
K
So why is that a problem? Well, it's a problem because we have two very different link rates whose delays are experienced very similarly. So, going on, next slide: we can simply construct non-congestive delay (next slide) such that we add different delays to each flow, and the two flows think the link rates are that different. Next slide, please.
K
So if you take a look at BBR: BBR is a complex algorithm, as many of you are aware, but the only thing that you need to know for this talk is that if there is some jitter in the network (even a very small amount; any type of jitter works), BBR will maintain a queuing delay that's equal to its propagation delay.
K
That's because it will be cwnd-capped, for those of you who know how BBR works. And the consequence of that is that if the propagation delays of two flows are different, they're trying to maintain very different queues at the bottleneck, and the flow that is trying to maintain the smaller queue is going to starve.
K
So this is an example of running the kernel version of BBR on an emulated link at 120 megabits per second, where one flow has a 40-millisecond propagation delay and the other has an 18-millisecond propagation delay, and we find that the one with the smaller delay gets less than a tenth of the throughput of the other flow. Next slide. And, you know, we discussed how Vegas, FAST, and Copa starve, and we did a similar experiment, this time with a 60-millisecond propagation delay.
K
Just one packet gets a delay of 59 milliseconds, and that gets one flow to underestimate its minimum RTT and therefore, again, send at more than 10x different rates. And this is a problem I observed in Copa before starting on this project, on figuring out how to fix fairness, and then finding out that we really can't. Next slide.
K
So I said that delay-convergent algorithms don't work; so could deliberately oscillating the delay help? Well, it could. The reason is that the ambiguity in estimating the congestive component of delay effectively discretizes it, and a delay-convergent algorithm is going to measure the same discretized delay every time. Next slide.
K
So, in conclusion, the ways to overcome this problem could be to find a different style of CCA design that deliberately oscillates the delay by large amounts, or to design for finite link rates (in the paper we detail an idea for how to make that better than current designs), or to use in-network support like ECN. And with that, I conclude my talk.
J
Roland Bless, KIT. Thanks for the presentation, quite interesting. We designed TCP LoLa, which hasn't been tested with large links or links of varying link rates, so it may not work very well with that, but it has some kind of explicit fairness mechanism built in, in order to try to achieve fairness. It does this similarly to, let's say, a water-filling algorithm, which I guess does not tie your sending rate to the queuing delay by that kind of equation, so maybe it does not apply to your theory.
J
I have to think about that. My other observation is: do you think it would help (I mean, we experimented with that) to have explicit information from the intermediate nodes on queuing delay?
K
Yeah, so the answer to your second point is yes. I think if we have access to explicit queuing delay, then we can have almost perfect congestion control, and we've known this for a long time, with both XCP and RCP; I think both are great algorithms, if we could get them deployed. And as for your first point: when you're analyzing LoLa, the thing you need to look at is, if you run it on an ideal link, the mathematically perfect link, or any other link:
K
Does the delay converge to some constant, or some small range of values? If it does, then it's going to suffer from this problem. So even if it does not follow the Vegas/FAST/Copa line, it may, like BBR, converge to a small set of delays if the network has a little bit of jitter in it, because it will just be cwnd-capped. So that is the telltale sign to look for.
E
There's a question in the chat; I'll read it out: do you differentiate between self-inflicted delay variation versus delay variation caused by a competing flow? Would a delay-convergent CCA starve even while competing with a non-delay-convergent CCA?
K
So we only looked at the case where the same CCA is competing against itself, because that's the simpler case. Since we found problems there, we just assumed that if there are two different CCAs, then that problem is just going to be harder. So we have not looked at that explicitly, but I imagine it's much, much harder.
E
Yeah, that's a fair point, but I think the question that's lingering in my mind there is: how does this change when you have another CCA that's actually going to force the delay variations you're talking about? In the sense of... yeah, I'll hold off on my question; I need to formulate it better. But I'm going to move on to Christian, who's next in line.
L
Hi, thank you. I did read your paper; it was a couple of months ago now, you know, just after SIGCOMM, but I've scribbled all over it, and I can deal with that on the list. But I just have one question, really, to see if I can get this right: when you say that these things could happen, how realistic is it? That's still not really quantified.
K
So that's a good question. What we have found is that there are phenomena that we know appear on the internet, like just different flows with different RTTs sharing a link, under which, if they happened, starvation would occur. We have also found that this is not hard to reproduce at all on test networks, where, you know,
K
we didn't even have to add jitter; the operating system jitter was enough to cause this problem, with operating systems that are coupled to the emulation. But we have not quantified exactly how often this happens on the internet, and I also think that's almost impossible to do. I mean, I won't say impossible, but we've tried, and the data is hard to look at, because you don't actually know when two flows are sharing a bottleneck: even if they're sharing a link that's congested, maybe one of the flows is genuinely congested somewhere else.
L
Well, sorry, it's a different question, and that is: among these possible techniques around the problem on the last slide, is there also the category of congestion controls like delay-gradient ones, where you're looking for correlation between your sending behavior and the congestion, so that you can distinguish that from the non-congestive delay?
E
That's an interesting thought. I think, Venkat, you might be familiar with some work that's basically frequency analysis of traffic, using some of that information to figure things out. I'll just say that I would personally love to see some thought put into the confounding factors. Right? I mean, starvation is possible, but that's also happening in a certain vacuum, with a certain spherical cow, right? So that's what you're showing, and I think that's absolutely reasonable.
E
The question is: how confounding are the confounding factors of reality? I don't know how to articulate that any better, but I think it's important, because what you do see is that something like BBR, like you said, despite the fact that it gets cwnd-capped most of the time, which is what we understand it to be, does end up doing what it does, and there are a lot of reasons why things get deployed. The question to me is: how do you figure out how relevant the result is to real traffic? It's a variant of what Bob was asking for, but just slightly differently formulated.
K
So for that I'd like to say, I absolutely agree, and what I believe has happened is that operators have kind of implicitly understood this, not exactly in "starvation in congestion control is the problem" terms, but we've deployed a ton of isolation on the internet, you know, from queue management and so on, to fair queuing, to just rate-limiting people at the entry to the network. And therefore, on the internet,
K
it is not very common that two long-running flows actually compete on the same bottleneck. When they do, I think this problem happens, but I think most of the internet basically works because we've prevented, you know, the original end-to-end vision of the internet, where all of the intelligence is at the end hosts; we've put a lot of intelligence in the middle.
E
Yep, that's fair and true. Anyway, I will leave it here. Thank you, Venkat, for presenting, and I want to welcome you to join the mailing list, where hopefully Bob will post his comments and hopefully you'll have a good forum to engage with. So please join the mailing list, let's continue this discussion there, and we'd love to see you at future ICCRG meetings. You're welcome to stay on for the rest of this one and enjoy the rest of the presentations. Okay.
E
So
let's,
let's
thank
venkat
again.
First
time
presenter
into
ICC.
E
And now moving on to the next one. Sorry, hang on, let me pull this up. The next talk is about the challenges and benefits of precisely specifying congestion control algorithms. Lenore, Lenore, Lenore; I can't get the accent right, I'm sorry.
M
My name is still Lenore, as it was a second ago. Ken McMillan, who is my collaborator in this work, is here too. Next, please. We are trying to advocate formal specifications. Next.
M
Okay, what we are trying to do is to obtain formal specifications for congestion control algorithms. We did it for NewReno, for no particular rationale; historical reasons that I cannot even remember. Those formal specifications should allow us to formally prove some high-level properties at the model level, the formal specification level.
M
We're never talking about proving anything formally all the way down, and I know that lots of people here don't like formal verification. We don't particularly either, even though we've been doing it all our lives. But there's something else we can gain: we can automatically generate tests that truly stress-test implementations against the specification, and that actually buys us a lot. Next, please. Next. Okay, so what are formal specifications?
M
By a formal specification we mean an unambiguous description of the protocol that will be used as a reference for what the protocol does. By doing that, we end up clarifying the intent of the protocol and all the hidden assumptions that are sometimes hard to find in the natural-language description. Next, please. I mentioned specification-based testing: we now have the natural-language documentation.
M
We have the formal specification, we have implementations in the wild, and we somehow need to show how they relate to one another. Specification-based testing allows us to show that our formal model fits what's really being done in the implementations, so it allows us to expose conformance issues between an implementation and the specification itself.
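The specification-based testing loop described here can be sketched in miniature (a toy stop-and-wait protocol with made-up names, not the actual tooling used for the QUIC work): the formal spec is a predicate over event traces, and the tester generates randomized stimuli, drives the implementation, and checks the observed trace against the spec.

```python
import random

# Toy "formal specification": every DATA(n) must carry the next expected
# sequence number, and every ACK(n) must acknowledge that same number.
def spec_allows(trace):
    expected = 0
    for event, n in trace:
        if n != expected:
            return False
        if event == "ACK":
            expected += 1
    return True

# Toy implementation under test (hypothetical; stands in for a real stack).
class Impl:
    def __init__(self):
        self.next_seq = 0
    def on_data(self, n):
        # A conforming implementation ACKs the sequence number it expected.
        if n == self.next_seq:
            self.next_seq += 1
            return ("ACK", n)
        return None  # out-of-order data is ignored

def run_test(seed):
    """Generate a randomized stimulus, drive the implementation,
    and check the resulting trace against the specification."""
    rng = random.Random(seed)
    impl, trace = Impl(), []
    for seq in range(rng.randint(1, 20)):
        trace.append(("DATA", seq))
        ack = impl.on_data(seq)
        if ack:
            trace.append(ack)
    return spec_allows(trace)

assert all(run_test(s) for s in range(100))
```

A non-conforming implementation (say, one that ACKs the wrong number) would produce a trace that `spec_allows` rejects, which is exactly the kind of conformance issue the method exposes.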
M
Next. We did this for QUIC several years ago. In QUIC we have the QUIC client and the QUIC server, over UDP, with the application layer on top, and what we could formally prove, as a property of the model where we take the QUIC client, the QUIC server, and the UDP they interact over, is that it really implements the functionality, that it satisfies the user-level guarantee: that the stream that is sent is the stream that is received. Next, please. We also did refinement testing.
M
So we test both ways, and by that we check refinement: that every behavior we get with our formal client and a real server follows a behavior that is allowed by the formal specification. That's what we mean by refinement. Next, please, and next. As expected, we found numerous problems. That is only to be expected, because it has never happened to me that I looked carefully at a protocol without finding some issues. But we found some things that were interesting, because we looked at four implementations, and those four implementations had gone through very thorough
M
interop testing, and yet we found new problems that were not revealed by the interop testing. We found, of course, corrections for the formal model, because it was partial, and it was nice to find bugs in it and be able to correct them. Next, please. Oh, I added here that the model was both too strong and too weak; what I mean by "too strong" is that the formal protocol model was too constrained.
M
We of course found numerous errors in implementations, as I said, to be expected: conformance issues, crashes due to low-level bugs in implementations, but also cases where they did not conform to the specs, or where the RFC itself was ambiguous. The reason we could do this is that specification-based automatic test generation allows us to generate much more general stimulus than you would get just with interop testing. Okay, next, please. The surprising thing is that we found some security vulnerabilities.
M
This is surprising because we never actually set out to find security vulnerabilities intentionally. For example, we got a denial of service in client migration, because of some issues in implementations and in the RFC, and in some implementations little oddities like leaking memory, disclosing memory that shouldn't have been disclosed. But I should emphasize that we just found very strange runs; it was the community in question, out there in cyberspace, that recognized these as security issues.
M
So we want to do the same with NewReno, and it's really, really hard. One, we don't really understand the protocol; I guess I mean I don't really understand the protocol. There are lots of little behaviors there whose rationale I don't understand. But moreover, QUIC is a transport layer sitting on top of UDP, and it's very clear what the assumptions about the network are. But here, to look at congestion control,
M
we actually need some quantified model of the network, and we just heard a talk on why that is really so hard to get. Without knowing what the network is, we can't understand the properties; anyway, we can't understand what the properties of congestion control are. Being here for a couple of days, it became clear that for different network models you may have different requirements for congestion control. I see Gorry is here nodding. But we need to know what those are before we can do it. Next.
M
So we are asking you guys, at large and in the rest of the meeting, to help us come up with a reasonable model that you can agree on, and some quantitative properties, which we can then try to prove formally and test for conformance. Next, please. I said that it's hard for us to understand you.
M
Then, as I said, the quantitative model of the network: in QUIC, the functional model was really easy. Here we don't need a functional model; we need a qualitative, sorry, quantitative model, and this is something that, yes, we can make up. Others have been made up in the literature; we can make one up. But is it good enough? I mean, that's your judgment.
M
It's not ours. The question is whether the quantitative model we're working on is good enough, so that you accept it and accept the proofs; and we can't define quantitative properties without the model. Next, please. Next, please. As I said, for the properties of NewReno in QUIC we have a very simple functional guarantee at the transport layer, and we can build on top of that. Next, please. But the congestion control guarantees are much harder to work out, and we have very ill-defined networks.
M
Next, please. There has been prior work on this, but it assumed a sort of ideal AIMD, and in NewReno the increments are not constant and the multiplicative decrements are not constant. It just doesn't fall out: some properties have been defined in the literature based on those constants, and once those constants aren't there, there is no way to interpret those properties. And the studies that we have looked at ignore timeouts, and timeout is where we see really weird behavior, just looking at the behavior on paper, in our pseudocode, in our specification.
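The non-constant-increment point can be made concrete with a small sketch (illustrative, in units of segments; not the RFC pseudocode): an idealized AIMD adds a constant alpha per RTT and multiplies by a constant beta on loss, whereas NewReno's congestion-avoidance increment per ACK is roughly MSS*MSS/cwnd, so it shrinks as cwnd grows, and a timeout collapses back to slow start.

```python
MSS = 1  # work in units of segments

def ideal_aimd_rtt(cwnd, alpha=1.0, beta=0.5, loss=False):
    # Ideal AIMD: constant additive increase, constant multiplicative decrease.
    return cwnd * beta if loss else cwnd + alpha

def newreno_ack(cwnd, ssthresh):
    # NewReno congestion avoidance: the per-ACK increment is MSS*MSS/cwnd,
    # i.e. NOT a constant; it shrinks as cwnd grows.
    if cwnd < ssthresh:            # slow start: one MSS per ACK
        return cwnd + MSS
    return cwnd + MSS * MSS / cwnd

def newreno_timeout(cwnd):
    # Timeout: ssthresh = cwnd/2, cwnd collapses to 1 MSS (back to slow start),
    # a regime the ideal-AIMD analyses simply don't model.
    return 1 * MSS, max(cwnd / 2, 2 * MSS)

# Per-ACK increments at cwnd = 10 and cwnd = 100 differ by a factor of 10:
inc10 = newreno_ack(10, ssthresh=5) - 10
inc100 = newreno_ack(100, ssthresh=5) - 100
assert abs(inc10 - 0.1) < 1e-9 and abs(inc100 - 0.01) < 1e-9
```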
M
So why are we here? To ask for help, and to try to convince you that what we can do with formal specification is really worth it: you actually buy a lot, and also formal verification, but much beyond that. And we need help in understanding a congestion control protocol. It doesn't have to be NewReno, just one that is really used, where it is interesting to understand the properties and how they relate to the assumptions about the environment, about the network, and possibly the properties as a function of the network. Next, please.
M
I was trying to convince you that formal specification is truly beneficial, at the risk of repeating myself. Yes, we can verify, and you all hate it, but proving key properties happens at the very top level: if you have some properties that you know the high-level model satisfies, and you then know that the implementation conforms to the model, that buys you a lot. You will not be able to establish everything formally all the way down; that is really a rabbit hole.
M
Formal specification is really important and useful, but to make it work we need to get a definition of the network, of the environment in which a congestion control algorithm works, and then the properties it is supposed to satisfy as a function of the environment it operates under. Thank you.
E
Thank you, Lenore. We've got Bob and Gorry in the queue, and I'm going to close the queue after that. Bob.
L
Hi. I think it would be useful if we knew how you would model a network, and we don't. But even if we did, I think we'd have to constrain it to certain types of network. I remember a talk, I don't know how many years ago, a couple of years ago, five years ago maybe, where someone was showing that, for instance, a radio network scheduler actually changes its behavior depending on what you do, so you're not going to be able to model that, because it's secret.
L
So the more you push it, sometimes it gives you a bit more, and then it stops giving you a bit more, and those algorithms are deliberately proprietary. So the network environment isn't independent of the congestion control.
A
Gorry Fairhurst speaking, as an individual. That was super; I really enjoyed it, thank you. The thing I take away is that we don't actually know the questions you're asking, so we can't dance to them, because these questions are hard. Maybe we don't want a model of the network; maybe we want a model of what we want the network to respond to.
A
Maybe we want to know where the corners are, because I really agree with Bob that real networks, even local DSL networks, have lots of intelligence, and their behavior is very, very hard to predict. But we do perhaps know where the sensitivities are and how to design against them, and we could test that. That would be super. So it'd be really nice to write them down, wouldn't it? Sure.
E
Yeah, thank you for the comment, Gorry. Colin, do you have a comment as a chair? Yes, please come on in.
C
Hi, Colin Perkins, IRTF chair. Just a very quick advertisement: there's some discussion about creating a new IRTF research group on the topic of usable formal methods, at a side meeting on Thursday lunchtime. I forget which room, but it's on the side-meeting list. Please do come along if you're interested in this topic.
E
Thank you for that. Ian, I'm going to ask you to take your question offline, but thank you; that was a great presentation and discussion. We are running way behind on the agenda: as you may have noticed, we have 19 minutes left and about 35 minutes of agenda time to go through, which we obviously can't do. So, Ingemar, do you want to come on and do your presentation? Can you shorten it? If you could, that would be very helpful. Marcelo,
E
the option is to do both of your presentations or to choose one of them; I'll leave that up to you. All right, go for it, Ingemar. Okay, yeah, so I'm having all kinds of trouble here. Simone, can you bring Ingemar's presentation up? Stop sharing first and then start again. Yeah.
N
Okay, you can go directly to the second slide. Right, on the first one I'm going to describe a bit how the L4S implementation is done in the Radio Access Network that we had on display at the hackathon this week, and then on Monday as well. You probably know a lot about this already. As we can see, we have an application, sort of an application
N
client, and you've got the traffic either in uplink or in downlink, or both. What we actually do is the congestion marking, regardless of whether it's uplink or downlink; we do it in the Radio Access Network, in the base station, because the protocol stack in the radio is encrypted below the PDCP layer.
N
So, in short, the intention with the L4S congestion marking is that you get marking just before a queue starts to build up, or when you detect that a queue is beginning to build up. We can take the next slide, please. What we do, for ease of implementation, and also per the specification in the 3GPP standards, is set up a separate bearer for the L4S traffic.
N
That means that in the user plane function (UPF) you detect that you have a flow that is eligible for L4S; it simply carries the ECT(1) code point. Then we steer that to the dedicated bearer that does the L4S marking. "Dedicated" doesn't mean that this has higher priority or anything like that compared to the normal mobile broadband bearer.
N
It's just a dedicated bearer that makes the congestion marking easy, because you steer the L4S traffic into it, and you can keep the latency low as long as the L4S flows comply with a congestion control that is L4S-capable.
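The marking behavior described, marking ECT(1) traffic just as a queue begins to build, can be sketched as a simple step-threshold marker (an illustrative sketch with a made-up threshold, not the actual RAN implementation):

```python
MARK_THRESHOLD_S = 0.001  # mark when queuing delay exceeds ~1 ms (illustrative)

def should_mark_ce(queue_bytes, link_rate_bps, is_ect1):
    """Return True if this packet should receive the CE mark.

    Only L4S (ECT(1)) traffic steered onto the dedicated bearer is marked,
    and marking begins as soon as the standing queue implies noticeable
    delay, i.e. just before a large queue can build up.
    """
    if not is_ect1:
        return False
    queuing_delay_s = queue_bytes * 8 / link_rate_bps
    return queuing_delay_s > MARK_THRESHOLD_S

# On a 100 Mbit/s link, 12500 queued bytes correspond to 1 ms of delay,
# so a 20 kB queue triggers marking while a 5 kB queue does not.
assert should_mark_ce(20_000, 100e6, is_ect1=True)
assert not should_mark_ce(5_000, 100e6, is_ect1=True)
assert not should_mark_ce(20_000, 100e6, is_ect1=False)
```

In a real radio scheduler the instantaneous link rate varies with fading, so the effective threshold moves with it; the sketch only shows the step-marking principle.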
N
I believe you can take the next slide here. One important thing is how congestion control works for 5G networks, and here we have an example of the resource allocation: you have resources in time and frequency.
N
The thing is that the actual throughput you get depends on the number of resource blocks you have and on the transmission conditions: you can have path loss due to distance, and in higher frequency bands you can have trees in the way that reduce the number of bits you can transmit per resource block. That means you have both fast fading and slow fading, and the throughput varies, sometimes pretty quickly with fast fading.
N
Fast fading is, you know, on a timescale of milliseconds; slow fading is more like when you move around a corner, depending on movement and things like that. On top of that, you can have a difference in the actual throughput depending on whether you have losses at the MAC layer that temporarily reduce the throughput, the HARQ part of it. We can take the next slide, actually, and depending on how much time I have, which is not very much, I believe we can take this slide.
N
One question that is up: should we use L4S for everything? By heart, I think we should; it's a really good thing in itself. But there are some considerations to think about: we have fast fading, and in particular we have a radio link whose throughput goes up and down on a very short time scale.
N
So there is a trade-off: do you want high link utilization, or small network buffers and somewhat lower utilization? One thing I know is that it is generally more efficient to transmit larger chunks: when you have more data in the buffer to transmit, it's more efficient, radio-wise, to transmit it, and, referring back to Monica's presentation, you can save some battery, some energy. On the other side, it can increase the overall capacity in a radio network when there is more data to transmit, and things like multi-user MIMO also generally work better
N
if you have the data in the buffer to transmit. So these are considerations, and I believe that if you have really latency-sensitive traffic, interactive applications, then of course you should use L4S. But if you have things like video on demand, with a buffer of a few seconds in the client, then it could be better to consider using Classic traffic instead. And one possible middle ground, even though the issues have been discussed here,
N
is to consider rate-based algorithms, even though they are not perfect. So there are some trade-offs here. I'm not sure how much time I have; should I cut here, or take the discussion offline later on?
N
I believe I will be around here the entire week, so if you have questions around this, please do
E
reach out to me. Thank you, Ingemar. Yes, please reach out to Ingemar, and of course I want to encourage folks to use the list as well and continue the discussion there.
O
Okay, Marcelo from UC3M. I'll try to do my best with the very few minutes that I have left. I have been doing some experiments with LEDBAT++ and BBR. As you know, both LEDBAT++ and BBR are congestion control algorithms being specified in this research group.
O
LEDBAT++ is a less-than-best-effort transport, which targets a 60-millisecond queuing delay. BBR, on the other hand, is a best-effort transport that aims to work as close as possible to the optimal operating point, which is essentially without queuing delay. So you can see that some things here may not go as well as planned. Next slide. The experimental setup that we used is essentially this: we have two servers.
O
So this is the first experiment. What we have done here is take one LEDBAT++ flow and one BBR (BBRv1) flow, and run a series of experiments using different base RTTs. What you would expect in this case, since LEDBAT++ is a less-than-best-effort flow and BBR is a best-effort flow, is that LEDBAT++ yields in front of BBR.
O
But we don't observe that for all possible RTTs. We observe that it is true for RTTs larger than 60 milliseconds; however, for RTTs smaller than 60 milliseconds, we see that they share the capacity, an even split if you want. Essentially, what seems to be happening here is that BBR has a flight-size cap that is essentially one BDP.
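A back-of-the-envelope reading of that explanation (taking the one-BDP flight-size cap at face value, as an assumption): one BDP of standing queue drains in about one base RTT, so the queuing delay LEDBAT++ observes from BBR is bounded by roughly the base RTT, and for base RTTs under 60 ms it never crosses LEDBAT++'s target, so LEDBAT++ never backs off.

```python
LEDBAT_TARGET_S = 0.060  # LEDBAT++ queuing-delay target: 60 ms

def bbr_induced_queue_delay(base_rtt_s):
    # If BBR's flight size is capped at ~1 BDP beyond the pipe, the excess
    # is ~1 BDP of queued bytes, which takes ~1 base RTT to drain:
    #   queue_delay ~= (1 BDP) / bottleneck_rate = base_rtt
    return base_rtt_s

def ledbat_yields(base_rtt_s):
    # LEDBAT++ only backs off when the observed queuing delay
    # exceeds its 60 ms target.
    return bbr_induced_queue_delay(base_rtt_s) > LEDBAT_TARGET_S

assert not ledbat_yields(0.020)  # 20 ms base RTT: equal split observed
assert ledbat_yields(0.100)      # 100 ms base RTT: LEDBAT++ yields
```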
O
We have done a few other experiments confirming that this is essentially the case, in the next three slides. Moving to the next slide: we vary the RTT for the BBR flow but keep the same fixed RTT for the LEDBAT flow, and we observe the same behavior. This means that it only depends on the BBR RTT. Next slide.
O
The results do not vary with the capacity: we tested different capacities and always obtained the same results for the different RTTs. Next slide, please. And if you have several flows of each, what we observe is that no matter how many BBR and how many LEDBAT flows we have in the mix, the aggregate capacity of all the LEDBAT++ flows is still more or less the same.
O
All these previous experiments were performed using a large buffer, able to sustain a 60-millisecond queue. If you reduce the buffer, so that you cannot reach the 60-millisecond queue, what we observe is that the point at which LEDBAT++ yields is essentially the size of the buffer; sorry, the RTT which fits in the size of the buffer. So for RTTs smaller than the size of the buffer we have this equal split, and for RTTs larger than the size of the buffer, LEDBAT++ yields.
O
So what are possible solutions, if we want LEDBAT++ to actually yield in front of BBR? Because we have a LEDBAT++ implementation working, what we did is simply change the LEDBAT++ implementation to define the target as the minimum of the current target (60 milliseconds) and the base RTT. This basically implies that LEDBAT++ can only generate a queue of one additional RTT.
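The modification described is essentially a one-liner; a sketch (hypothetical names, not the actual implementation's code):

```python
BASE_TARGET_S = 0.060  # LEDBAT++'s standard 60 ms queuing-delay target

def effective_target(base_rtt_s):
    # Modified target: the minimum of the standard 60 ms target and the
    # base RTT, so LEDBAT++ adds at most ~1 extra RTT of queue, and,
    # crucially, a queue that BBR's ~1-BDP backlog will always exceed.
    return min(BASE_TARGET_S, base_rtt_s)

assert effective_target(0.020) == 0.020  # short path: target = base RTT
assert effective_target(0.150) == 0.060  # long path: unchanged 60 ms target
```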
O
If you do that, it actually works, and you obtain the results that you can see here: LEDBAT++ with this modification yields in all cases. That's one part. The second part is that we also found some interesting results when looking into how these two congestion controllers measure the base RTT. Both of these congestion control algorithms perform periodic slowdowns to accurately measure the base RTT.
O
This is one trace from one experiment, and basically what we're showing here is that the slowdowns of BBR and LEDBAT++ are not synchronized: they do not slow down at the same time. The result is that they mis-estimate the base RTT, which implies that they are aiming at a different queue, and that implies that they
O
will try to send more aggressively than they should. Okay, next slide. So, of course, a possible solution is to make both of them use the same slowdown mechanism. Actually, we believe that if they need to choose one, they should be using the BBR one, because we think that the one that LEDBAT uses doesn't actually work in all cases.
O
I have worked out some examples of how this could be the case, but one possible solution would be for both of them to use the BBR one. Next slide. And this brings us to my next thought, which is basically whether it wouldn't be necessary to define some congestion control algorithm invariants: things that all congestion controls should implement.
O
There can be a more relaxed way of achieving that: finding some form of conditions that imply that all of them slow down at the same time, even if they don't do exactly the same thing. But if we want to measure the base RTT, we should have all of them try to empty the queue at the same time, because if they don't, you won't be able to empty the queue.
E
And maybe even do it with the same period, right? Yeah, that too. So, there's a question in the chat from Ian: which implementations of BBRv1 and LEDBAT++ are these?
O
The BBRv1 we used is the one that's in the Linux implementation, and the LEDBAT++ is the one that's in Windows Server.
A
On that table, when you say do it at the same time: do you mean at the same time even when they have vastly different RTTs? How does that work?
O
I would think that if the period is similar, it should work out. What BBR does is slow down for at least two RTTs and at least 200 milliseconds, so that the slowdowns overlap. That would be my take, but...
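Marcelo's point about overlap can be checked with a small sketch (illustrative numbers and phases, not any implementation's actual schedule): with a common period and a minimum slowdown duration of max(200 ms, 2 RTT), the quiet windows of two flows with very different RTTs can still intersect.

```python
def slowdown_windows(period_s, rtt_s, start_s, horizon_s):
    """Slowdown intervals for one flow: every `period_s`, stay slowed down
    for at least max(200 ms, 2 RTT), in the style of BBR's ProbeRTT."""
    dur = max(0.200, 2 * rtt_s)
    t, out = start_s, []
    while t < horizon_s:
        out.append((t, t + dur))
        t += period_s
    return out

def overlap(a, b):
    # Two interval lists overlap if any pair of intervals intersects.
    return any(s1 < e2 and s2 < e1 for s1, e1 in a for s2, e2 in b)

# Same 10 s period, very different RTTs, slightly offset phases:
flow1 = slowdown_windows(10.0, rtt_s=0.010, start_s=0.00, horizon_s=60)
flow2 = slowdown_windows(10.0, rtt_s=0.150, start_s=0.15, horizon_s=60)
assert overlap(flow1, flow2)  # with these phases, the quiet windows meet
```

With wildly different phases the windows can still miss each other, which is exactly why some shared convention (or mutual detection) is part of the invariant being proposed.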
E
I do want to say very quickly, Colin: yes, we should look for a bigger room next time, and we will do that.
E
Thank you, and thank you all for being a bit flexible, as always. This was a packed agenda, and we are really pleased to have had wonderful discussions today. Thank you, everybody. I want to thank Simone again for helping out all over. Enjoy the rest of the IETF, and we look forward to the discussion online. Thank you, folks.