From YouTube: IETF 117 IRTF Open
Description
IETF 117: IRTF Open
The Internet Research Task Force (IRTF) Open session, including Applied Networking Research Prize (ANRP) presentations, will be held during IETF 117 at 2000 UTC on 27 July 2023. ANRP presentations will include: Simon Scherrer for his work on modelling the BBR congestion control algorithm, and Siva Kakarla for his work on verifying the correctness of nameservers.
Good afternoon everybody, this will be the IRTF Open meeting. Can I just check the audio is working in the room and for the remote people?
So, in that case, we should get started. Welcome: this is the IRTF Open meeting at IETF 117 in San Francisco. My name is Colin Perkins, I'm the IRTF chair, and I'll be running this meeting today. I'd like to start with the usual administrative discussion. First of all, a reminder that the IRTF follows the IETF's intellectual property disclosure rules.
So, by participating in this meeting and in the other IRTF meetings taking place this week, you agree to follow the disclosure processes and policies. In particular, if you're aware that any contributions you make are covered by patents or patent applications that are owned or controlled by you or by your sponsor, then you must disclose that fact or not participate in the discussion. The links on the slide have full details of the policy.
In addition, I'd like to remind you that we routinely make recordings of these meetings: this meeting will be recorded and the recordings will be published online.
If you don't want to be photographed or recorded, then you can wear one of the red "do not photograph" lanyards.
However, I will remind you that this meeting is going out live on the stream and will be on YouTube, so if you make comments at the microphone then you are likely to be recorded.
In addition, I'd like to remind you of the code of conduct. It is important that we all work respectfully with each other in these meetings, and that we try to make this a pleasant environment for everyone, so please do follow the code of conduct. If you have any concerns about the behavior of any of the participants, please either contact me or contact the ombudsteam; the contact details are on the slide.
So that we can keep track of who's attending these meetings, and so we can get an appropriately sized room next time, please do sign in: either by scanning the QR code on the screen at the front, by scanning the QR code on the blue sheets that are about to start circulating, or by signing in via the Datatracker. Once you sign in to Meetecho, that will record your attendance and ensure we get an appropriately sized room in future.
Remote participants: make sure your audio and video are turned off unless you're actively trying to talk. For everybody, we'll be using a single unified queue, so if you want to join the queue and ask questions then you should do so in Meetecho, whether you're on your phone or using the full client on your laptop, and we'll manage a single queue for both the local and the remote participants.
We can publish informational or experimental documents in the RFC series, and we do that on occasion; I will mention a few of those that have been published by the IRTF recently on a later slide. But the primary output of the research groups is much more commonly understanding and research results that are disseminated by publishing papers, rather than by publishing RFCs.
The IRTF is organized as a number of research groups; there are, I think, 16 research groups currently active. Those highlighted in dark blue on the slide are still to meet: we have the Network Management Research Group in a couple of hours' time, and then tomorrow morning we have the Measurement and Analysis for Protocols group and the Privacy Enhancements and Assessments group.
A small bit of research group news: I'm saddened to announce that Jana Iyengar and Michael Schapira are both stepping down as ICCRG chairs. Thank you both for your service. Jana especially has been chairing that group for about the last decade or so, and has made a tremendous contribution to that research group and to the IRTF; Michael joined more recently, but has also made a great contribution in the time he was available. Both of these people have had changes in their day jobs and increases in their workload that meant they had to step down from chairing this group.
In addition, I am, I guess, pleased because it has completed its work, but saddened because we're losing a research group, to announce that the Coding for Efficient Network Communications Research Group, the network coding research group, will be closing after this meeting because it has finished its work.
Thank you very much to the two chairs of that group, Vincent Roca and Marie-José Montpetit, and to all the current and past participants in the group. It's been a very successful group: it has produced a number of documents which have been published as RFCs, including a couple that have just finished the publication process and been published in the last few weeks.
Coincidentally, a really nice survey came out of a seminar series at the Technical University of Munich, which Marie-José found a few weeks back, which talks about the work of the group and surveys the progress made over the last few years. If you're interested in the topic of network coding, I'd encourage you to look at the results of the group and at that survey, which discusses what the group did. It's been a very successful group, with, as I say, a number of good documents.
Finally, I'd just like to mention that the Decentralized Internet Infrastructure Research Group has just completed a rechartering, which was approved by the Internet Architecture Board earlier this week.
The details of the charter are on the Datatracker. The key change is that the group has changed its name from Decentralized Internet Infrastructure to the Decentralization of the Internet Research Group.
Originally, when it was chartered, this group very much focused on decentralized technologies and protocols (peer-to-peer systems, blockchain-based systems, distributed ledger systems, and so on), with a minor focus on the various economic aspects and economic drivers and roadblocks over the years.
The focus of the group has gradually shifted away from the protocols and technologies and more towards the economic aspects of centralization and decentralization, and towards understanding the drivers for the changes in the Internet and in the way the Internet and its protocols are developed. This change in the charter, in this recharter, just reflects that shift towards the non-protocol aspects of centralization. So I'm looking forward to the work from that group; they had a really nice meeting earlier
this week, with a talk from Cory Doctorow, which was really nice, about some of the drivers for centralization, and some nice economics talks as well. So I'd encourage you to look at the recording of that session and to participate in the group going forward.
As I said, the main focus of the IRTF is more on research papers and understanding, and less on publishing RFCs, but the IRTF can publish RFCs, and in the last few months since the previous meeting we have published several. Two of these, Tetrys, an on-the-fly network coding protocol, and the BATS coding scheme for multi-hop transport, are the last two outcomes of the network coding research group, and are both network coding protocols. The Privacy Enhancements and Assessments group also published a couple of RFCs, on the history and use of transient numeric identifiers and on the privacy challenges in generating and using such identifiers.
So please do have a look at those; some nice RFCs coming out of the IRTF there.
The Applied Networking Research Prize recognizes some of the best recent results in applied networking. It recognizes interesting new research ideas which are potentially of relevance to the standards community, and it recognizes some of the upcoming people that we hope will have an impact on Internet standards and technologies in the coming years.
We're very proud to organize this award, and we're very grateful to the Internet Society, to Comcast, and to NBCUniversal, who support us in this activity and make it possible. I think it's a really great award; it's been running for a number of years now, and we've had some fantastic papers and some fantastic people receive these awards so far.
We're going to be making two awards today: again, two fantastic papers and two fantastic people getting the awards. Simon Scherrer will be receiving the award for his work on modeling the BBR congestion control algorithm, and Siva Kakarla will also receive the award for his work on verifying the correctness of nameservers.
The slides are all up on the website, and we are going out live now; the recording will also be on YouTube afterwards, so look out for that in a couple of minutes, let's say.
We also organized the Applied Networking Research Workshop that took place on Monday this week. It's an annual event which we organize in conjunction with ACM SIGCOMM.
The ANRW is an academic workshop: the proceedings are published in the ACM Digital Library, and it's a peer-reviewed workshop. As I say, we had a fantastic workshop on Monday: we had a great keynote from Dave Levin, a really nice panel discussion about the future of the Internet, and a bunch of really good research papers. So I'm really pleased by how well that came out.
Thank you very much to the organizers, Francis Yan from Microsoft Research and Maria Apostolaki from Princeton, for all their effort putting that together; and thank you to the reviewers, the authors, and the speakers. As I say, it was a fantastic workshop this week, and if you were not able to attend, please do look at the papers (the program is linked from the slide here), or watch the recordings, which are on YouTube and are also linked from the program page.
I'm also pleased to announce that the 2024 Applied Networking Research Workshop will co-locate with the Vancouver IETF next summer (in the northern summer, I guess), in July 2024. The organizers will be Simone Ferlin from Red Hat and Ignacio Castro from Queen Mary University of London. No details on the website yet, but the link on the website will become live in a few weeks.
And I'm also very pleased to say that we have managed to offer a significant number of travel grants to attend this meeting and the meetings this week.
We provide travel grants for early-career academics and for PhD students, with a particular focus on those from underrepresented groups, to be able to attend the IRTF meetings co-located with the IETF. I think we were able to award 12 travel grants this time; we've had a fantastic set of people, and I am very grateful to Akamai, to Comcast, and to Netflix for the sponsorship that makes this possible. Look out again for details of the travel grants for the coming IETF and IRTF meetings in Prague later this year.
The details will become available at the link on the slide, again in a few weeks' time. Thank you to the sponsors, and please contact me or Stephanie from the Secretariat if you are interested in helping expand these programs.
All right, so that's all I have to say. On our agenda for today, we have the Applied Networking Research Prize talk from Simon Scherrer in a second; he'll be talking about model-based insights on the performance, fairness, and stability of BBR.
We were supposed to have the second award talk from Siva Kakarla. Unfortunately, Siva has gotten sick and is unable to attend today, so we will have to reschedule that, I guess to the November meeting. So I'd like to invite Simon up to give his award talk.
Okay, so please join me in welcoming today's award talk, given by Simon Scherrer. Simon is a fifth-year PhD student in the network security group at ETH Zurich, where he's advised by Adrian Perrig. In his research, Simon has specialized in modeling network dynamics, including congestion control, path selection, and ISP competition. His talk is entitled "Model-based insights on the performance, fairness, and stability of the BBR congestion control protocol".
Okay, so, hello everyone. Before I start, I would like to express my enormous gratitude to the IRTF for the ANRP and for the opportunity to present my work here today. Specifically, I would like to present my work on model-based insights on the performance, fairness, and stability of BBR.
In summary, in this work we construct the first fluid model for the BBR congestion control algorithm, we experimentally validate this model, and we derive new insights from it. To start, I would first like to take a look back at the journey of BBR so far. BBR is a relatively recent congestion control algorithm; it was first presented by Google in 2016.
However, BBR is already widely used: it was quickly enabled for YouTube and, by the most recent estimates, is used by around 40% of downstream Internet traffic.
Today, BBR is still under ongoing development, which is demonstrated by the release of BBR version 2 in 2019, and I learned just this week that BBR version 3 is now very much on its way. This ongoing development of BBR has partially been driven by the research community, which has contributed to the understanding of BBR over the years in various ways.
However, this previous research leaves open some important questions, and these open questions have to do with the approaches that were taken by this previous research. These approaches come in two broad categories. The first approach in previous research is experimental evaluation, where the actual BBR implementation is tested in real physical network settings. Now, to gain general insights from these experimental evaluations, the BBR implementation has to be tested in a wide variety of network settings, and in this regard the issue with experimental evaluations is their scale-dependent cost.
That's where models come in. In general, congestion control models try to mathematically describe the behavior of a congestion control algorithm, and can then predict the algorithm's performance in yet-unseen network settings. However, so far only steady-state models of BBR have been proposed, i.e., models that describe the behavior of BBR in its static phase, after convergence.
These models do not have a notion of time and therefore cannot express transient effects, such as, for example, the convergence itself. However, understanding this convergence is actually really important, because the steady states in the steady-state models are only relevant for performance if the algorithm can actually be shown to converge to these steady states. In our work, we try to fill these gaps in previous research with a fluid model. In congestion control research, a fluid model is a system of ordinary differential equations that describes two pieces of joint dynamics.
The second piece of dynamics in fluid models covers the response of network metrics to the sending rates chosen by the congestion control algorithms; for example, these differential equations describe the evolution of the queue length over time. With this approach based on differential equations, fluid models have a number of advantages. First, they enable efficient evaluation, as opposed to experiments, because the differential equations in the fluid models can actually be efficiently solved for a wide range of network scenarios.
To illustrate this, note that traffic flows in fluid models are modeled as fluids and therefore do not have a notion of packets. That means that simulating a large traffic flow is actually not more computationally costly than simulating a small traffic flow, because it just boils down to larger numbers in the same calculations.
The second advantage is that fluid models can in fact express transient effects, as opposed to steady-state models, and thereby allow us to investigate convergence behavior. Even more importantly, fluid models allow us to rigorously prove whether an algorithm converges to a steady state, by using methods from theoretical stability analysis and control theory.
So that's exactly what our work is about. In our work, we construct a fluid model that reflects BBR behavior, and to make that work we actually had to design some new modeling techniques, which I will explain afterwards. We didn't only propose the fluid model: we also validated it with measurements from actual experiments, and in the process of this validation,
we could confirm previous insights into BBR performance, but also generate new insights. And finally, we could also apply theoretical stability analysis to our fluid model and thereby prove that BBR in fact converges, or, more technically, that the equilibria of the BBR dynamics are asymptotically stable. Now we will go through each of these contributions one by one, and I will start with the design of the fluid model.
Now, I mentioned that we had to design some new techniques to reflect BBR behavior with differential equations, and to explain why that is, I will first illustrate how congestion control functionality has traditionally been represented in fluid models, using the example of the traditional Reno algorithm. As we all know, at the basis of Reno there is a quite simple control loop. This control loop maintains a congestion window of, let's say, size w. Whenever an acknowledgment is received, the congestion window size is increased by one segment divided by the congestion window size; this translates to an increase of one segment over roughly one round-trip time.
Otherwise, if a packet loss occurs, the congestion window size is cut in half. Now, this control loop is modeled as follows in the classic fluid model by Steven Low. First, the sending rate x at any point in time is given by the congestion window size at that point in time, divided by the delay τ at that point in time.
Second, the evolution of the congestion window size itself is described by this differential equation here (it's a differential equation, which is why there's a dot above the variable on the left-hand side), which gives the change in the congestion window size at any point in time t. The right-hand side of this differential equation is quite complex, so let's go through it one by one.
The first factor in the first term here describes the share of non-lost traffic. p is a function that gives the loss rate at time t, so p(t − τ) gives the loss rate one RTT ago, and one minus this loss rate is the share of traffic that was not lost one RTT ago. Multiplying this by the sending rate x of one RTT ago, we get the current rate of incoming acknowledgments, and multiplying that by the congestion window size increase upon a received ACK gives us the aggregate rate of congestion window size increase. Now, turning to the second term: we again have the loss rate of one RTT ago, we multiply this by the sending rate of one RTT ago, and we again multiply this by the congestion window cut upon loss; thereby we get the aggregate rate of congestion window size decrease.
Now, this difference between the aggregate increase and the aggregate decrease is undoubtedly an approximation of actual Reno behavior. But, surprisingly, we find that this Reno fluid model reflects actual Reno behavior quite well, especially when we couple it with our network fluid model.
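Assembling the terms just described (with w the congestion window, x the sending rate, τ the delay, and p the loss rate, as on the slides), the classic Reno fluid model can be written as:

```latex
x(t) = \frac{w(t)}{\tau(t)}, \qquad
\dot{w}(t) =
  \underbrace{x(t-\tau)\,\bigl(1 - p(t-\tau)\bigr)\,\frac{1}{w(t)}}_{\text{aggregate window increase}}
  \;-\;
  \underbrace{x(t-\tau)\,p(t-\tau)\,\frac{w(t)}{2}}_{\text{aggregate window decrease}}
```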
However, in summary, we see here that there is a single fundamental variable, namely the congestion window size w, and this congestion window size is adapted in response to loss. That's actually not sufficient to reflect BBR behavior, and to see why, I'll focus on a small but important part of BBR functionality, namely the bandwidth probing in BBR version 1, and I will focus on how this bandwidth probing adapts
the sending rate of a BBR flow over time. Fundamentally, BBR splits time into probing periods, where each probing period consists of eight phases and each phase has the duration of the minimum measured RTT (this min RTT here). Generally, in each of these eight phases, the BBR flow sends at the rate that corresponds to the bottleneck bandwidth estimate, denoted by b here. However, in one random phase, the rate is raised by 25% to discover whether more bandwidth is available.
In the subsequent phase, the rate is then reduced by 25% compared to the base rate, to drain any queues that may have been built up before. As this probing goes on, the delivery rate is observed, i.e., the rate of incoming acknowledgments; this delivery rate should roughly correspond to the sending rate of one RTT ago if there is no congestion. The maximum of this delivery rate is then adopted as the new bottleneck bandwidth estimate, denoted by b′ here, and this probing process repeats in the next period.
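As a rough sketch of the phase structure just described: eight phases per probing period, one probing 25% above the bandwidth estimate, the next draining 25% below it, and the estimate updated to the maximum observed delivery rate. (This is my own simplification, not the paper's model: the probe phase is fixed at position 0 here, whereas BBR places it in a random phase, and the windowing of the max filter is omitted.)

```python
# Sketch of a BBRv1-style probing cycle: eight phases, each lasting one
# minimum RTT. One phase probes 25% above the bottleneck bandwidth
# estimate, the next drains 25% below it, the rest send at the estimate.
PACING_GAINS = [1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

def pacing_rate(btl_bw_estimate, phase):
    """Pacing rate during the given phase (0..7) of a probing period."""
    return btl_bw_estimate * PACING_GAINS[phase % 8]

def updated_estimate(delivery_rate_samples):
    """New bottleneck bandwidth estimate: the maximum delivery rate
    observed while probing (max-filter windowing omitted)."""
    return max(delivery_rate_samples)
```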
Specifically, we have to construct differential equations such that we can represent these probing pulses, such that we can insert these probing pulses into random phases, such that the maximum delivery rate is tracked over time, and such that the bottleneck bandwidth estimate is periodically adjusted. On the next few slides, I will explain how we achieve this with mathematical functions.
So, let's start with the probing pulses. In order to represent these probing pulses with differential equations, we have to get a bit inventive with mathematical functions. For our purpose, we can use the well-known sigmoid function, which has this formula on the left side and increases from 0 to 1 for an argument around zero. We use this sigmoid function to create a pulse function. Now, for the pulses, remember that BBR splits time into phases, where each phase has the duration of the minimum measured round-trip time, denoted by τ_min here.
Now, let's say we want to insert a pulse into phase one. To do that, we take a sigmoid function that increases at the start of phase one, we take another sigmoid function that decreases at the end of phase one, and we multiply these sigmoid functions together to obtain a pulse that covers phase one. With such a pulse, we can augment the pacing rate of any BBR flow. To that end, we start with the bottleneck bandwidth estimate, denoted x_btl here; this forms the base, and then we add a 25% boost to it
in one phase, and subtract a 25% pulse from it in the next phase. Like this, we have represented these probing pulses with differential equations.
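A minimal numerical sketch of this construction (the function names and the steepness parameter k are mine, not from the paper): a steep sigmoid, a pulse built as the product of a rising and a falling sigmoid, and a pacing rate that adds a 25% pulse in one phase and subtracts one in the next.

```python
import math

def sigmoid(z, k=200.0):
    # Steep logistic function: ~0 for z < 0, ~1 for z > 0.
    return 1.0 / (1.0 + math.exp(-k * z))

def pulse(t, t_start, t_end):
    # Product of a rising and a falling sigmoid:
    # ~1 inside [t_start, t_end], ~0 outside.
    return sigmoid(t - t_start) * sigmoid(t_end - t)

def pacing_rate(t, x_btl, tau_min, phi_up):
    # Base rate x_btl (the bottleneck bandwidth estimate), raised by
    # 25% during phase phi_up and lowered by 25% in the next phase.
    up = pulse(t, phi_up * tau_min, (phi_up + 1) * tau_min)
    down = pulse(t, (phi_up + 1) * tau_min, (phi_up + 2) * tau_min)
    return x_btl * (1.0 + 0.25 * up - 0.25 * down)
```

With x_btl = 10 and τ_min = 0.1, the rate is roughly 12.5 in the middle of the probe phase, 7.5 in the middle of the drain phase, and 10 elsewhere.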
Now, remember that these probing pulses have to be inserted into random phases. For example, the pacing rate of flow i here has an upwards pulse in phase φ_i, which is one in this example, and a downwards pulse in phase φ_i + 1, which is two in this example.
In the fluid model, we can achieve this goal of desynchronization with a simple trick: we just assign a natural number as a flow ID to the competing flows, and then we calculate this flow ID modulo 7 to get the phase φ_i in which flow i has the upwards pulse. So if we were to simulate 10 flows, then flow one would have its upwards pulse in phase one, flow seven would have its upwards pulse in phase zero, and so on.
The flow ID is computed modulo 7, although there are eight phases in a probing period; this is because the upwards pulse is actually never inserted into the last phase of a BBR probing period. And this trick actually achieves the desynchronization, as is evidenced here by the pacing rates of flow 1, flow 7, and flow 9.
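The desynchronization trick is essentially a one-liner; as a sketch (the function name is mine):

```python
def probe_phase(flow_id):
    # Phase (0..6) in which the flow inserts its upwards pulse.
    # Modulo 7, not 8: the upwards pulse is never inserted into the
    # last phase of a BBR probing period.
    return flow_id % 7

# Flows with different IDs probe in different phases, so their
# probing pulses are desynchronized.
phases = [probe_phase(i) for i in (1, 7, 9)]  # -> [1, 0, 2]
```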
The next bit of challenging BBR functionality is to track the maximum delivery rate over time. To achieve this, we again rely on the sigmoid function, increasing from zero to one for an argument around zero, but this time we multiply the sigmoid function by a linear function to get this function γ here. This function γ yields 0 for an argument below zero and yields the identity for an argument above zero.
So this approximates what is commonly known as the rectified linear unit, and we can use this function γ to construct a differential equation that reflects the evolution of the maximum delivery rate over time. Specifically, if we have an arbitrary evolution of the delivery rate, here in blue, this γ will cause the current maximum to be adjusted upwards to the delivery rate if the delivery rate is above the maximum, but will preserve the current maximum otherwise. With this, we can track the maximum delivery rate with pure differential equations.
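To make this concrete, here is a small sketch (my own parameter choices, not the paper's) that integrates ṁ(t) = c · γ(x(t) − m(t)) with forward Euler: the tracked maximum m is pushed up towards the delivery rate x whenever x exceeds m, and is left essentially unchanged (γ ≈ 0) whenever x is below m.

```python
import math

def sigmoid(z, k=50.0):
    return 1.0 / (1.0 + math.exp(-k * z))

def gamma(z):
    # Sigmoid times identity: ~0 for z < 0, ~z for z > 0 -- a smooth
    # approximation of the rectified linear unit.
    return z * sigmoid(z)

def track_maximum(delivery_rate, t_end, dt=1e-3, c=100.0):
    # Forward-Euler integration of dm/dt = c * gamma(x(t) - m(t)).
    m, t = 0.0, 0.0
    while t < t_end:
        m += c * gamma(delivery_rate(t) - m) * dt
        t += dt
    return m

# Example: a delivery rate that rises to 10, then falls back to 6;
# the tracked maximum stays near 10.
def x(t):
    return 10.0 if 1.0 <= t <= 2.0 else 6.0
```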
Finally (so bear with me, this is the last slide of math), this tracked maximum then has to be eventually adopted as the new bottleneck bandwidth estimate, and to do that, we once more rely on the sigmoid function and once more create a pulse. However, this time we don't create a pulse that covers an entire phase; we just create a pulse that covers the very last few milliseconds of the entire probing period.
So here, if we have the last four phases of a probing period, we can create a pulse at the period end by using a sigmoid function that increases shortly before the period end and another sigmoid function that decreases right at the period end, and we multiply them to get this pulse. We can then use this pulse in another differential equation, which takes the maximum delivery rate that has been tracked over time and adjusts the bottleneck bandwidth estimate accordingly.
In this experimental validation, we validated our fluid model under a variety of network settings and on a variety of configurations. Each configuration consists of a single-bottleneck topology and a combination of congestion control algorithms. We test both homogeneous combinations, where all competing flows adopt the same congestion control algorithm, and heterogeneous, balanced combinations, where each congestion control algorithm is adopted by the same number of competing flows.
For each of these configurations we run two evaluation tools: a simulator that is based on our fluid model, which basically solves the differential equations in the fluid model, and an experiment environment that is based on the Mininet network emulator. From these evaluation tools we get results, and we compare these results to conduct the validation. We conduct two different types of validation: one is a trace validation, which looks at the evolution of network metrics over time.
This is the plot that I've just shown. We also conduct an aggregate-result validation, which I will talk about now. Importantly, in this validation we do not only find that our fluid model is very accurate: we can also confirm previous insights into BBR performance and generate new insights.
To start with a previous insight: our fluid model correctly predicts that BBR version 1 is quite unfair towards loss-based congestion control algorithms in shallow buffers, and this is demonstrated by this plot down here. On the left we have the predictions by the fluid model, and on the right we have the experiment results from the Mininet network emulator. On the x-axis, we have the buffer size of the bottleneck in multiples of the path bandwidth-delay product, and on the y-axis,
we have the Jain fairness index, which would be one for perfect fairness and close to zero (more precisely, 1/n for n flows) if a single flow obtained all the bandwidth. Here we see that our fluid model correctly predicts that, up to a buffer size of four bandwidth-delay products, the BBR flows actually obtain almost the complete bandwidth when they compete with CUBIC flows or Reno flows.
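The fairness metric on the y-axis is Jain's index; a minimal implementation:

```python
def jain_fairness_index(rates):
    # Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    # Equals 1 for a perfectly even allocation and 1/n (close to zero
    # for many flows) when a single flow obtains all the bandwidth.
    n = len(rates)
    total = sum(rates)
    return total * total / (n * sum(r * r for r in rates))
```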
So our fluid model really was the first tool to deliver this insight. To go to another previous insight: our fluid model correctly predicts that BBR version 1 also leads to high loss in shallow buffers; that's actually the root cause of the unfairness of BBR version 1 that we have discussed before. And the fluid model correctly predicts that BBR version 2 leads to very little loss, i.e., loss on the order of that caused by the traditional loss-based congestion control algorithms.
For a new insight, our fluid model identifies this buffering behavior of BBR version 2, which was previously undocumented. Here we see that, as the buffer size grows, the buffer utilization by BBR version 2 first decreases but then increases again, and this came as a surprise to us. So why do we observe this effect?
However, this in-flight increase is stopped early if excessive loss is detected or if the inflight-high mark is hit, and this inflight-high mark may be set in the startup phase of the BBR version 2 flow, but again only if excessive loss is detected. Now, the key insight here is that in large buffers barely any loss occurs, and that's why large buffers actually disable these loss-based safeguards in BBR version 2. As a result, the in-flight increase never stops early, so that means BBR version 2 probes more aggressively in large buffers.
That's why it measures a higher delivery rate. This measurement of the delivery rate enters into the estimate of the bandwidth-delay product, which means that BBR version 2 keeps even more data in flight, which then leads to a higher buffer utilization; and that's why we observe this U-shaped curve. A steady-state model would actually have difficulty predicting this effect, because it's an effect that unfolds over time; our fluid model, however, since it is dynamic, can actually reproduce this dynamic effect.
So one category of insights, which I just discussed, came from this experimental validation; another category of insights came from our theoretical stability analysis. So what exactly do I mean by a theoretical stability analysis? In a theoretical stability analysis, we start with the fluid model that I've talked about until now; this fluid model can be used for simulation and can also reflect the small-scale features of the evolution of the BBR sending rate, such as the probing pulses that we see here in the sending rate curves.
Then, with respect to this reduced fluid model, we find the equilibria, i.e., the steady states, which here are a combination of a sending rate distribution and a queue length from which the sending rate dynamics do not deviate anymore; that is, combinations which are preserved by the sending rate dynamics. And lastly, we can then investigate these equilibria with respect to their asymptotic stability. If these equilibria are asymptotically stable, that means that BBR in fact converges to these steady states, and we can prove this by means of tools from stability analysis, such as the Lyapunov method.
So that's how the stability analysis works; let's see what results it generated. In our stability analysis, we investigated different types of equilibria: we distinguish BBR version 1 and BBR version 2, and for BBR version 1 we also distinguish deep buffers and shallow buffers. For BBR version 1 equilibria in deep buffers, we find that these equilibria are not unique. That means there are multiple possible sending rate distributions to which BBR might converge, even in the same configuration, and, crucially, not all of these sending rate distributions are necessarily fair.
B
So BBR can converge to unfair equilibria. However, some of these sending-rate distributions are actually fair, so a fair equilibrium is still possible. Also positively, these BBR version 1 equilibria in deep buffers do not involve persistent packet loss, and they are asymptotically stable, so we actually have a mathematical guarantee that BBR eventually converges. Then, for shallow buffers, we find different equilibrium properties. Interestingly, here we find that the equilibria are in fact unique, they are guaranteed to be fair, and they are also stable.
B
However, they do involve persistent packet loss. And for BBR version 2, finally, we find more or less the same properties as for BBR version 1 equilibria in deep buffers, with one small change: fairness in BBR version 2 is actually guaranteed under the same round-trip time for all flows, whereas the same conditions do not guarantee fairness for BBR version 1, so there is a kind of improvement in that respect. I would like to note here that in all the investigated cases we could confirm the stability of BBR.
B
So with this I've provided an overview of all our contributions in the paper, and I would like to make some concluding remarks. I would like to split these concluding remarks into two parts, one relating to fluid models and one relating to BBR and congestion control in general.
B
So in terms of fluid models, I think our work shows that fluid models predict congestion-control behavior with surprising accuracy. This accuracy is qualitative, so it allows us to rank congestion-control algorithms with respect to certain metrics, and it is also quantitative: it predicts with quite high accuracy, for example, how high the loss rate will be. Therefore, in our opinion, fluid models are a valuable complement to experiments and steady-state models, the other two methods that have traditionally been used in BBR analysis.
B
We do not think of fluid models as a replacement for these methods in any way, because experiments are still the gold standard in evaluation, and steady-state models are much easier to work with theoretically. However, the combination of these three methods yields quite a powerful toolset for analyzing congestion-control algorithms, and we also see a possibility for fluid models to support eventual standardization efforts. If BBR were ever to be standardized, a fluid-model analysis could help to recommend some parameters.
B
Now turning to BBR and congestion control, I think the overall verdict of our fluid-model-driven BBR analysis is that BBR version 2 remains an incomplete improvement over BBR version 1. BBR version 2 eliminates the worst fairness and loss issues of BBR version 1; however, it still leads to some buffer-queuing behavior in other cases that might not be desirable. I think these tenacious performance issues indicate that internet congestion control is just really hard to get right, and I would like to expand on this point.
B
This difficulty has recently motivated some proposals to support congestion-control algorithms with resource-allocation mechanisms that run in the network. There have been proposals for congestion shares at HotNets three years ago, and there is also the proposal for bandwidth reservation in SCION that my colleagues at ETH Zurich work on.
B
However, I would expect that these proposals are at least somewhat controversial, and of course these proposals also have a long way to go to be practically usable in the internet. So, at least in the short term, the efficiency, fairness, and stability of internet congestion control remain an important research objective, and hopefully our fluid model can help achieve this goal. With that, I've arrived at the end of my talk. I'm happy to take questions, and thank you all for your attention.
A
D
Hey, [name] with Futurewei. My question was: I don't see in the model where you have the buffer drop behavior, like you said, whether RED is better than drop tail or the other way around. Where did that come into the model?
B
So, yeah, here: this actually gives a brief overview of our complete fluid model. Our fluid model has more or less two parts at the highest structural level. One is the network model that describes basically the response of the network; that is the leftmost column here, and in the differential equation that captures the queuing behavior, we can distinguish between drop tail and random early drop.
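A hedged sketch of how such a queuing differential equation can branch on the AQM discipline (this is a toy formulation of mine, not the paper's equations; the buffer size, thresholds, and rates are invented): a fluid queue evolves as dq/dt = (1 - p(q)) * arrival - capacity, and only the drop probability p(q) differs between drop tail and RED.

```python
def drop_prob(q, mode, B=100.0, min_th=20.0, max_th=80.0, p_max=0.1):
    """Drop probability p(q): the only place the AQM discipline enters."""
    if mode == "droptail":
        return 1.0 if q >= B else 0.0   # drop only when the buffer is full
    # RED: probability ramps up linearly between min_th and max_th
    if q < min_th:
        return 0.0
    if q > max_th:
        return 1.0
    return p_max * (q - min_th) / (max_th - min_th)

def simulate_queue(mode, arrival=12.0, capacity=10.0, B=100.0,
                   dt=0.001, steps=200_000):
    """Forward-Euler integration of dq/dt = (1 - p(q)) * arrival - capacity."""
    q = 0.0
    for _ in range(steps):
        p = drop_prob(q, mode, B)
        q += dt * ((1.0 - p) * arrival - capacity)
        q = min(max(q, 0.0), B)          # queue stays within [0, B]
    return q

q_dt = simulate_queue("droptail")   # under overload, fills the whole buffer
q_red = simulate_queue("red")       # settles near the upper RED threshold
print(q_dt, q_red)
```

Under sustained overload the drop-tail queue sits at the full buffer, while the RED queue stabilizes around its thresholds, which is the kind of qualitative difference the fluid model can expose.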
D
And another question as well: do you use a more advanced way of modeling the dynamics of the queue than the previous TCP models?
C
B
So for the network part of the model, most of this kind of network-response modeling has been done in previous work, but we refined this model in a few ways that actually allowed us to obtain more accurate results. For the congestion-control algorithm models themselves, we just use previous work; this has been done before.
C
B
Yeah, so it's definitely not in the model yet, but I think the model already has some places where the rate is constrained. Most prominently, for example, the sending rate is constrained by the congestion window of BBR, and the application limit could just be accommodated as another constraint, but so far we have not done this.
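The idea of an application limit as "one more constraint" could look like this in a fluid model. This is a hypothetical sketch: the function and parameter names are invented, and the real model's constraint structure may differ.

```python
def effective_rate(pacing_rate, cwnd, rtt, app_limit=float("inf")):
    """Effective sending rate as the minimum over all active constraints.

    The cwnd-derived cap (cwnd / rtt) mirrors how a window bounds the
    rate in a fluid model; an application limit is just one more term
    in the same minimum.
    """
    cwnd_rate = cwnd / rtt
    return min(pacing_rate, cwnd_rate, app_limit)

print(effective_rate(100.0, 50.0, 1.0))         # window-limited -> 50.0
print(effective_rate(100.0, 500.0, 1.0, 30.0))  # app-limited    -> 30.0
```

The appeal of this structure is that adding a new limiting factor does not require changing the rest of the dynamics, only the minimum.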
E
Hi Simon, thanks for presenting this. Coming from me as a non-mathematician, and not really understanding the consequences, I wondered if you could tell us something about the scalability of it. I saw you showing, in a real-life experiment, three flows compared to three flows for which you did the computations at points in time using the fluid model. Can it be done for, like, a million of these flows, or is it just computationally infeasible? Are there things that we could answer if we could scale it up?
B
It actually scales, yeah. So, for example, these simulation computations here were done for 10 flows, and this runs through very quickly; actually, simulating one flow is not that expensive. What becomes expensive is if you have a large-scale network topology, because you then have to gather the feedback from all the queues on the path, and there it becomes expensive. But I think with enough computing power the simulation-based approach based on the fluid model is still quite scalable.
E
For a campus-sized set of routers, would you want to have to build the experiment to convince yourself that the fluid model still works, or do your results already show that it will work with larger networks?
B
So I think the experimental validation that we already did gives some confidence that our fluid model yields reasonable predictions, and what we especially envision for the fluid models is to quickly deliver some first insights that can then maybe be investigated more closely with experiments, so that you don't have to do the experiments for the whole variety of network configurations. That's how we see these two approaches playing together.
A
Thank you. Can I just ask: does this model the startup dynamics, or is it just...
B
Most fluid models in congestion-control research abstract away the startup and just model congestion avoidance. But if we wanted to model, or investigate, the startup phase, we basically have two options: we could either construct another fluid model just for the startup phase and then at some point switch over in the simulation, or we could just evaluate the fluid model under a variety of initial conditions.
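The second option, evaluating the fluid model under many initial conditions, might be sketched like this. Again a toy scalar model of my own, not the talk's: it only illustrates the sweep, where a single stable equilibrium means the startup transient does not change the eventual state.

```python
import numpy as np

# Toy rate dynamics with additive growth and quadratic backoff:
#   dx/dt = k - b * x^2,  single stable equilibrium x* = sqrt(k/b).
k, b, dt = 1.0, 0.25, 0.01
x_star = (k / b) ** 0.5          # = 2.0

finals = []
for x0 in np.linspace(0.1, 10.0, 25):   # sweep over initial sending rates
    x = x0
    for _ in range(5000):                # 50 s of model time, forward Euler
        x += dt * (k - b * x * x)
    finals.append(x)

# Every start lands on the same equilibrium, so in this toy model the
# startup transient is eventually forgotten.
print(max(abs(f - x_star) for f in finals))
```

If different starts led to different final states, that would be a signal that the startup phase deserves its own dedicated model, which is the other option mentioned above.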
A
A question would be: can you model BBR version 3?
B
A
Okay, good. All right, are there any more questions?
A
All right. So, as I say, unfortunately the other award winner is absent, so he can't give his talk today, and we will reschedule that talk for the November meeting. So that's the conclusion of this meeting. Again, congratulations to Simon, and congratulations to Siva, who will give his talk later in the year. Thank you all for your attention, and I will hopefully see you all in Prague.