From YouTube: IETF109 IRTF Open
Description
The Internet Research Task Force (IRTF) Open session from the IETF 109 meeting.
A: Okay, people are still joining, so we'll give it a couple of minutes. Can I just check that the audio is coming through and everyone can see the slides?
A: Okay, the rate at which people are joining seems to be slowing down, so I guess we'll get started. Welcome, everyone, to the final session of IETF 109, sadly not in Bangkok. I hope the time zone is not too painful wherever you are; I realize it's not great for some people, but I hope you've survived the week in whatever time zone you're on, and I'm looking forward to some interesting talks to finish up.
A: So, to start, we have the usual Note Well: the intellectual property statement. Just a reminder that anything you say at the mic line or in the chat is covered by the Note Well disclosures, so please pay attention to that, and I'm sure you know the rules by now.
A
In
addition,
we
have
a
privacy
policy
and
a
code
of
conduct
be
aware
that
the
meeting
is
being
recorded
and
I
believe
it's
being
streamed,
live
on
youtube.
So
anything
you
you
say,
may
be
public
and
may
be
recorded.
A
In
addition,
we
have
the
code
of
conduct
the
ansi
harassment
policies,
so
please
be
respectful
in
the
discussions.
So
this
is
the
irtf
open
meeting.
The
irtf
is
a
parallel
organization
to
the
ietf
which
focuses
on
longer-term
research
issues
related
to
the
internet.
A
If
you
want
to
find
out
about
what
we're
doing,
the
irtf.org
website
has
pointers
to
all
the
materials
and
all
the
research
groups
we're
on
twitter
we're
on
facebook.
We
have
an
announcement
mailing
list
which
is
pretty
light
traffic,
and
we
also
have
a
discussion
list.
If
you
want
to
talk
about
any
of
the
topics
plus.
Each
of
the
research
group
lists
has
its
updates,
so
research
groups
has
its
own
mailing
list,
jabber
room
and
so
on.
A: The IRTF is organized as a set of research groups. There are 14 research groups currently chartered; those highlighted in blue on the slide met this week. We also have interim meetings coming up for some of the groups: the Information-Centric Networking Research Group has a meeting scheduled for, I believe, the first of December, and the Privacy Enhancements and Assessments Research Group was also talking about having a meeting in December focused on IP address privacy.
A: If I remember correctly, I think there was also some talk about the Thing-to-Thing Research Group having a meeting, although I'm not sure that's been confirmed, so look out for those coming up over the next month or two. There's a comment in the chat: GAIA did not meet this time, due to the conflict with the Internet Governance Forum. So yes, sorry about that; that's a mistake in the slide. We've also had four recent RFCs published on the IRTF stream.
A: A couple very recently: the Crypto Forum Research Group published an RFC on randomness improvements for security protocols, and the Information-Centric Networking Research Group has an RFC looking at research directions for information-centric networking in disaster scenarios. There are also a couple of RFCs from earlier in the year, again from the ICNRG, looking at deployment considerations for information-centric networking, and some terminology.
A: The ANRP is awarded to recognize the best recent results in applied networking: to recognize interesting new results and new ideas of potential relevance to the Internet standards community, and to recognize upcoming people who are likely to have an impact on Internet standards and technologies.

A: In particular, we've got a focus on bringing in people or ideas that would perhaps not otherwise get much exposure to the IETF and IRTF community, and who would perhaps not otherwise be able to participate in the discussion.
A
The
anrp
works
as
as
an
annual
nomination
cycle,
and
then
the
talks
happen
spread
over
the
the
itf
meetings
in
the
coming
year.
The
nominations
for
the
2021
awards
are
open.
The
deadline
for
nominations
is
the
22nd
of
november,
which
is
this
sunday.
A: So if you know of any good applied networking work, any interesting people, any interesting new ideas that would be worth bringing into the IETF and the IRTF, and that you think would be suitable for this prize, then please go to the irtf.org site, at irtf.org/anrp, and fill out the nomination form. We encourage self-nominations if you think your work is appropriate for the prize.
A: The first of these will be from Debopam, who will talk about his work designing network topologies for low Earth orbit satellite constellations. Georgia will be talking about her work on Internet transparency, and finally Ranysha will be talking about congestion control fairness. With that, I'd like to pass over briefly to Mat Ford from the Internet Society, who I think wants to say a few words to congratulate the winners.
C: I know it is a fairly unpleasant hour for some of you to hear these talks, so that's great to see. I just wanted to say that we have had long-standing support from Comcast and NBCUniversal to help us in supporting the prize. If you work for an organization that thinks this is a great initiative and would like to see your name on the list of sponsors for the ANRP, please don't hesitate to get in touch with me. Thanks very much.
A: Okay, thanks Matt. Thanks again to Comcast and NBCUniversal for sponsoring this, and congratulations to the winners. So, with that, the agenda for the rest of the meeting will be the three prize-winning talks, starting with Debopam, then Georgia, and then Ranysha.
A: So, the first talk, if Meetecho can get the video ready, will be by Debopam, who will be talking about network topology design at twenty-seven thousand kilometers per hour.

A: Debopam is a fifth-year PhD candidate at ETH Zurich, and I understand he'll be on the job market soon, so if you like this work, please chat to him afterwards. His work focuses primarily on low-latency network infrastructure, both terrestrial and satellite.

A: Okay, Meetecho, if you can queue up the video.
D: Products offering laser inter-satellite connectivity at 10 to 20 Gbps, at distances up to 8,000 kilometers, are already available. One caveat is that it takes a few seconds to tens of seconds to establish these laser ISLs; this will be important later when we discuss the various topology design challenges.

D: But what are all these developments leading up to? The primary goal is to build mega-constellations consisting of thousands of low-flying satellites which provide global, low-latency Internet coverage. SpaceX is leading the front: their project is named Starlink, and they have already deployed more than 800 satellites in orbit.
D: Another descriptor is orbital inclination. Each orbit here has several satellites. While polar orbits travel over the poles, orbits can also be inclined; inclination is the angle between the orbit and the equator. Here the inclination is 53 degrees. Inclined orbits restrict satellites to the densely populated lower latitudes; for polar orbits the inclination is 90 degrees. Laser or radio links can be used between satellites, with lasers offering higher data rates.
D: We will refer to this inflation over the geodesic, in this case about 3x for fiber, as stretch. We compared the RTTs between the eight most popular cities over today's Internet to the estimated RTTs over a constellation consisting of 1,600 satellites; the estimated RTTs are 70% lower than today's Internet at the median.
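A minimal sketch of how such a stretch figure can be computed for one city pair. The great-circle formula and the speed-of-light constants are standard; the 1.5x fiber detour factor and the example coordinates are illustrative assumptions of mine, not numbers from the talk:

```python
from math import radians, sin, cos, asin, sqrt

C_VACUUM_KM_S = 299_792                  # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47      # roughly two-thirds of c in glass

def geodesic_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle (geodesic) distance between two points on the Earth."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(h))

def stretch(achieved, geodesic_lower_bound):
    """Stretch = achieved path length or latency over the geodesic lower bound."""
    return achieved / geodesic_lower_bound

# Example: a fiber route 1.5x longer than the geodesic, at fiber speed, ends up
# a bit over 2x the vacuum-geodesic latency; real paths add routing detours too.
d = geodesic_km(40.71, -74.01, 51.51, -0.13)        # New York <-> London
geodesic_rtt_ms = 2 * d / C_VACUUM_KM_S * 1000      # ideal RTT at c in vacuum
fiber_rtt_ms = 2 * (1.5 * d) / C_FIBER_KM_S * 1000  # detoured fiber RTT
print(round(stretch(fiber_rtt_ms, geodesic_rtt_ms), 2))
```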
D: We can also see from the plot that the latencies over satellite paths are very close to the geodesic latencies at the speed of light in vacuum. Let us now explore the system dynamics of these constellations. Satellites move very fast relative to the Earth and with respect to each other: if a satellite is now over eastern Brazil, in six minutes it will already reach Africa, crossing the entire Atlantic, traveling more than 500 kilometers per minute.
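For intuition on those speeds, here is a small sketch of the circular-orbit velocity at a 550 km altitude, using textbook constants; this is my own illustration, not the speaker's tooling:

```python
from math import sqrt, pi

MU_EARTH = 3.986004418e14   # standard gravitational parameter of Earth, m^3/s^2
R_EARTH_KM = 6371.0

def orbital_speed_kmh(altitude_km):
    """Circular orbital speed for a satellite at the given altitude."""
    r_m = (R_EARTH_KM + altitude_km) * 1000
    return sqrt(MU_EARTH / r_m) * 3.6

def orbital_period_min(altitude_km):
    """Orbital period for the same circular orbit."""
    r_m = (R_EARTH_KM + altitude_km) * 1000
    return 2 * pi * sqrt(r_m ** 3 / MU_EARTH) / 60

print(round(orbital_speed_kmh(550)))   # ~27,300 km/h, the talk title's 27,000 km/hour
print(round(orbital_period_min(550)))  # ~96 minutes per orbit
```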
D: Such highly dynamic behavior translates to links becoming infeasible in minutes. We should keep this in mind while exploring topology design, and prioritize links which are available for longer periods. Having gone through the primer, let us now look at some of the challenges that arise due to the scale and extreme dynamicity of these mega-constellation networks.

D: How do we define the trajectories of the satellites and set up inter-satellite links in order to serve the geographically distributed population? The various objectives, constraints, challenges and geographically varying demands make topology design a high-dimensional optimization problem. We do not yet know whether the Starlink constellation is the optimal solution to this.
D: Note that such delay, due to path latency and path changes, is highly predictable in satellite networks. Also, frequent path changes result in packet reordering, thus affecting even loss-based congestion control schemes. Having gone through the general challenges, let us quickly touch upon the utility of laser ISLs before we start discussing topology design; our work at ACM HotNets 2020 discusses the benefits of having ISLs in further detail.

D: Despite the high-bandwidth, low-latency promise of ISLs, there is substantial uncertainty about whether or not LEO constellations will successfully incorporate ISLs. One hurdle may be the burn-up-on-re-entry requirement that regulators are asking operators to satisfy, so as not to risk injury and damage from de-orbiting satellites. In response, Starlink's filings were amended to exclude mention of these components.
D
Luckily,
leo
networks
do
not
need
isls
to
provide
service.
One
might
think
how
to
achieve
long
distance,
end-to-end
connectivity
when
there
are
no
isos
remember
as
the
constellations
are
being
rolled
out
ground
terminals
would
also
be
rolled
out.
These
terminals
include
user
terminals
on
land
and
those
mounted
on
airplanes
and
ships,
and
also
more
powerful
ground
stations
without
isles
connections
between
far
separated
ground
terminals,
bounce
up
and
down
between
satellites
and
on
path
terminals
yielding
a
bent
pipe
or
bp
connectivity.
D
D
In-Depth
quantitative
analysis
of
these
downsides
are
in
the
paper
in
case
you
are
interested,
but
to
give
some
brief
insight,
let
us
touch
upon
temporal
latency
variations
and
impact
of
weather
with
bp.
The
path
between
massio
in
brazil
and
durban
in
south
africa,
for
example,
sees
an
inflation
of
100
milliseconds
over
time.
D: This is because the density of air traffic is much lower over the South Atlantic than the North; hence the path often ends up using aircraft flying over the North Atlantic as intermediate hops. Note that this behavior not only inflates the RTT of this path significantly, but also makes the heavily used paths over the North Atlantic even more congested. By virtue of inter-satellite links, in a dense deployment of satellites, such latency variations get reduced by as much as 80%.

D: The impact of weather would also be significantly higher on BP connectivity than on ISL connectivity. For paths consisting of ISLs, this value is either the first- or last-hop attenuation, whichever is worse, but for BP paths it is the worst attenuation seen across all links of the zigzag path bouncing between the satellites and ground terminals.
D: In contrast, thanks to the narrow beams and negligible interference issues, inter-satellite laser connectivity is unlicensed. GSO satellites fly above the equator and operate in the same frequency bands sought for LEO communication; thus LEO satellites, when crossing the lower latitudes near the equator, must avoid interference with GSO satellites. While a ground terminal's field of view is already restricted by the minimum angle of elevation, GSO arc avoidance makes this field of view even more restricted.

D: Such analysis is also relevant for constellations like OneWeb which do not plan to incorporate laser ISLs. Okay, having gone through the general challenges and the utility of having ISLs, we come back to the problem at hand: how can we connect satellites in order to offer higher aggregate bandwidth, keeping in mind the dynamics of the system?
D: The key constraint of this system is its dynamics: satellites flying fast relative to the Earth and to each other. But we have other constraints as follows: setting up links can take a few seconds to tens of seconds, during which time the involved communication units cannot transfer data; hence it's important to minimize changes in which links are connected.

D: I'll also lay out the assumptions we are making in solving the inter-satellite topology design problem. We assume that the satellite trajectories are given; we rely on FCC filings by SpaceX and Amazon that describe their trajectories. While trajectory design is in itself an interesting question, we have not explored it in this work.
D: We treat +Grid connectivity, which I already touched upon earlier, as our baseline. On the last point, +Grid being the baseline, what are we trying to improve? One drawback of these very local links to adjacent satellites is that one requires a large number of hops to reach distant destinations, and on each hop the end-to-end communication consumes resources, limiting the network's throughput. We would like to build networks that give short paths but also admit high capacity.
D: This is the atmospheric layer that extends to about 80 kilometers above the Earth's surface, beyond which there is no water vapor present. Given the altitude of the satellites, in this example 550 kilometers, using simple geometry we can calculate the maximum permissible ISL length, which is about 5,000 kilometers here.
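That line-of-sight geometry can be written down directly; a minimal sketch using the 550 km altitude and the roughly 80 km atmosphere figure quoted in the talk (the helper name is mine):

```python
from math import sqrt

R_EARTH_KM = 6371.0

def max_isl_length_km(sat_altitude_km, atmosphere_top_km=80.0):
    """Longest ISL whose straight line stays above the atmosphere.

    Two satellites at radius R+h can see each other as long as the chord between
    them clears a sphere of radius R+h_atm, so each half of the chord is the
    tangent length sqrt((R+h)^2 - (R+h_atm)^2).
    """
    r_sat = R_EARTH_KM + sat_altitude_km
    r_min = R_EARTH_KM + atmosphere_top_km
    return 2 * sqrt(r_sat ** 2 - r_min ** 2)

print(round(max_isl_length_km(550)))  # ~5,014 km, i.e. the "about 5,000 km" in the talk
```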
D: We also define an aggregate metric as a linear combination of stretch and hop count, with alpha being the weight, or importance, of stretch.
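Spelled out, the objective being described might look like the following; the function and argument names are my own shorthand for what the talk states verbally:

```python
def aggregate_cost(avg_stretch, avg_hop_count, alpha):
    """Linear combination of traffic-weighted average stretch and hop count.

    alpha close to 1 optimizes mostly for stretch (latency); alpha close to 0
    optimizes mostly for hop count, a proxy for capacity; values in between
    trade the two dimensions off.
    """
    return alpha * avg_stretch + (1 - alpha) * avg_hop_count
```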
D: Superficially, this seems like a traditional network topology design problem. These are known to be hard even in static settings, but a variety of heuristics are used in practice.

D: One key contribution of this work is showing why traditional techniques don't work here. We tried integer programming, but it suffers from two major shortcomings. It is limited in scalability: runtime is greater than two days even for 25 cities on a beefy machine, and the estimate is roughly 10^29 days for a thousand cities. But even if we managed to speed up the ILP by a large factor, or use appropriate approximation techniques, there is a more fundamental issue here.
D: The sets of ISLs minutes apart are more than 90% different. We should remember that ISL setup times are a few seconds to tens of seconds, so even if we could solve the problem optimally for a snapshot, the set of optimal links varies significantly minute to minute. Hence, to maintain optimality, the system would need to break and make a lot of links continually, resulting in significant disruption of connectivity.

D: Regaining lost connectivity is not straightforward, and one would have to break and make many links. Additionally, random graphs are very inflexible and do not allow the operator to decide on the operating point: they simply yield one combination of stretch and hop count, without any control over which dimension to optimize for. So, where do we go from here?
D: Here we plot the average hop count along the x-axis and the average stretch along the y-axis for the different motifs. Note that both stretch and hop count are weighted by the traffic matrix, in this case population products for city pairs. Different motifs offer different trade-offs in stretch and hop count, and we can see a Pareto frontier of motifs superior to the rest in both hop count and stretch.
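As a small illustration of how such a frontier can be extracted once each motif has been evaluated (the numbers below are made up to show the shape of the output; the motif evaluation itself is the paper's and is not reproduced here):

```python
def pareto_frontier(motifs):
    """Keep the motifs not dominated in both average hop count and average stretch.

    `motifs` is a list of (name, avg_hops, avg_stretch) tuples, both metrics
    already weighted by the traffic matrix; lower is better for both.
    """
    frontier = []
    for name, hops, stretch in motifs:
        dominated = any(h <= hops and s <= stretch and (h, s) != (hops, stretch)
                        for _, h, s in motifs)
        if not dominated:
            frontier.append((name, hops, stretch))
    return sorted(frontier, key=lambda m: m[1])

# Hypothetical per-motif results, purely to show the output shape:
print(pareto_frontier([("+Grid", 9.0, 1.12), ("motif-A", 6.5, 1.25),
                       ("motif-B", 6.6, 1.40), ("motif-C", 5.8, 1.55)]))
```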
D: This plot quantifies the number of motif possibilities at different latitudes, for polar as well as inclined constellations. The number increases significantly with latitude: for example, for the inclined constellation there are around a thousand options at the equator versus around 3,500 at higher latitudes. So how do we exploit this characteristic?

D: Consider a single orbital plane. A quadrant of this orbit consists of 10 satellites, out of the 40 in total in this orbit. We split the quadrant into multiple zones, three in this case. First, we find the optimal motif for the entire constellation, considering only the motif choices available at zone 1; then, keeping the zone 1 links fixed, we remove the rest of the links. We repeat the same process for zone 2 and then for zone 3, and arrive at the three-zone multi-motif.
D
This
is
the
final
three
zone
multimotive
scheme
for
the
40
squared
constellation.
We
are
looking
at
as
you
can
see
in
the
plot.
The
multimotives
offer
a
higher
performance
beyond
the
single
motif.
Frontier
different
alpha
values
result
in
different
multimotives
and,
as
we
give
higher
priority
to
stretch
the
schemes
move
towards
plus
grid
putting
all
of
this
together.
These
are
the
performance
improvements
above
the
baseline
plus,
quick
connectivity
stalling
sees
more
than
2x
improvement
over
baseline,
while
quipper
also
has
similar
benefits.
D
D
There
are
various
dimensions
to
explore,
including
the
number
of
shells,
orbits,
satellites
per
orbit,
inclination
and
elevation.
We
are
working
on
routing
and
congestion
control
challenges,
as
I
briefly
touched
upon
earlier.
In
this
talk
to
help
bootstrap
research
in
this
area,
we
are
actively
developing
packet
level
and
flow
level
simulators
that
capture
the
necessary
system
dynamics.
A: Okay. Jonathan?
E: Hi, good morning. So when you say routing and congestion control, is the idea that you would send traffic on a longer route, over less populated areas, just to avoid other traffic?
D: Yeah, that's a very good question. What we have seen is that, of course, when it comes to routing and congestion control, if we send packets along just the shortest paths it doesn't work; it's very inefficient, in the sense that it congests some areas of the network heavily. For example, imagine all the traffic between North America and Europe using links over the northern Atlantic: those links would be heavily congested. So, of course, we will need some sort of spreading of traffic; k-shortest paths is something that comes to mind immediately, which can actually spread the traffic more. But beyond that, what is interesting when it comes to intra-domain routing is that the network is moving. This has not happened much in the past, where the network nodes are moving at incredibly high speeds. So what happens is that the expected load changes: as a link moves away from one set of paths, it starts catering to new paths as well.

D: So even if you try to send traffic via some optimal set of paths, those paths would see the individual link utilizations changing over time, as the links cater to a different set of end-to-end paths. So this is also very interesting, and what can possibly be done here is that, given the satellite trajectories are known in advance, somewhat, this dynamicity is predictable in advance, so that advance knowledge can be applied in order to shift traffic away from links which are going to become more congested in the...
F: Echoing the comments of other people in the chat, this is a really good talk. I think you addressed this question in the talk, but I'm not sure I fully caught the answer. You were talking at one point about variability in the path latency done terrestrially, but of course that variability... the physical path itself, the latency is actually fixed; the variability comes from queuing delays and traffic competition. I'm not sure I caught how much you were modeling that along the satellite path, because the satellite paths, I think, are lower total bandwidth, and so they're actually potentially vulnerable to congestion at some point as well, right? Or lower link bandwidth; I'm not sure if the aggregate is actually lower or not.
D: Right, so I agree with that. By that I mean, I agree that this variability, in the case of paths, usually comes from queueing and congestion, but in the case of satellites it also comes from the fact that the links are changing: the link lengths are changing continually, and there are often path changes as well. When it comes to the topology design work, which was the main focus of this talk, we were sticking to the path length, so this is just propagation delay, without considering any queuing delay, when we compute the latency and the variations thereof. The variation I talked about was in a more recent paper, published at HotNets 2020; there we quantified the impact of forgoing ISLs: if you just use bent-pipe connectivity, that increases the variation of these end-to-end latencies.

D: But beyond that, what we are doing now is trying to understand the impact of this variability when it comes to endpoint congestion control. There are two aspects to this. Number one is that the link lengths are continually changing, because even if the satellites which have inter-satellite links between them are moving together, because of the Earth's geometry they come closer at higher latitudes and move farther apart at lower latitudes near the equator; so the link latencies are continuously changing.
D: That's number one, and number two is that the paths would change as well: as paths become more and more inefficient after a while, the end-to-end traffic would need to use a different path, so these path changes are also frequent. These two factors heavily impact endpoint congestion control.

D: So we have to understand that this is not really congestion; these are changes that are expected, like path changes and path latency changes, but they would still impact congestion control if not handled properly.
D: So when I talked about path variations, like variation in latency, that was the HotNets 2020 work, where we compared bent-pipe connectivity and ISL connectivity: what happens if a constellation has ISLs versus what happens if it doesn't have ISLs.

D: So there we have an apples-to-apples comparison. But when I was explaining the plot where we compare Internet latency, that is the ping latencies from WonderNetwork data, to satellite latency, there we do give some sort of edge to the satellite network: the Internet latencies, of course, were also affected by congestion and so on, while the satellite figure was just propagation delay. But if you look at the difference, the difference in end-to-end latency is almost 70% at the median.
D: So even if there is some overhead incurred end to end, or even at the link level, like modulation, encoding, error correction and so on, if we account for that inflation the latency would still be significantly lower than today's Internet; that's the expectation. Now, of course, when we bring queueing and congestion into the scenario, things will be more interesting, but we're trying to simulate that.

D: At the moment we are building a packet-level simulator, and we already have a paper out; this simulator can simulate entire constellations and cross traffic. There we are also getting some interesting results which arise due to cross traffic and the congestion it causes.
A: We've got a couple of minutes, so go ahead.

F: So, the topic that I brought up in the chat, which you already partially answered, about the inter-city traffic model: is that model original to you, or is there a reference for it? I've downloaded your paper, but I didn't find a reference in the paper yet.
D: Yes. So finding the right traffic matrix was difficult, because, you know, when it comes to satellite networks we should also address the temporal changes in traffic demands. This we didn't do; we were sticking to very simple traffic matrices. That's the reason we just picked this population traffic matrix and the GDP traffic matrix, and then stuck with the gravity model, which is the population product; the traffic demand is proportional to that. But, you know, the tool we built in the context of this work can be used with any arbitrary traffic matrix. Of course, we didn't do any more sophisticated ones where we also consider the temporal variations, but these two are the traffic matrices for which we have results.
F: So the product of city populations is equivalent to the gravity model?
D: There is another variant of it where we also consider the distance between city pairs, and the demand gets reduced as the distance increases; but this is one reasonable traffic matrix which has been used in the past, in a couple of different papers. I can send you a couple of references offline.
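A minimal sketch of the two demand models being described, including the distance-discounted gravity variant he mentions; the populations, distance and decay exponent below are placeholders, not values from the paper:

```python
def gravity_demand(pop_a, pop_b, distance_km=None, decay_exponent=0.0):
    """Gravity-model traffic demand between two cities.

    With decay_exponent = 0 this is the plain population-product matrix used in
    the talk; a positive exponent gives the distance-discounted variant in which
    demand falls off as the cities get farther apart.
    """
    demand = pop_a * pop_b
    if distance_km and decay_exponent > 0:
        demand /= distance_km ** decay_exponent
    return demand

# Population product only (relative units):
print(gravity_demand(8.4e6, 9.0e6))
# Same city pair, discounted by distance:
print(gravity_demand(8.4e6, 9.0e6, distance_km=5_570, decay_exponent=1.0))
```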
A: Okay, great, thank you; excellent talk. And a reminder to everybody that, as I said at the beginning, Debopam is on the job market soon. So thank you, Debopam. Next up is Georgia, whose talk is about morphing packet reports for Internet transparency.

A: Her research focuses on enhancing trust around the performance of network systems, and its implications for user anonymity and the privacy of network topologies. Okay, Meetecho, if you're ready for the next video.
G: Consider the scenario on this slide, where a broadband ISP receives all traffic, that is, video streaming traffic originating at Netflix and other traffic, through the same intra-domain path. At some point Netflix streaming quality deteriorates due to large delays, begging the question: why, exactly? This results in a peering dispute, similar to the one between Netflix and Verizon: Netflix blames the ISP for discriminating against Netflix traffic, while the ISP says, I am doing nothing wrong; it just so happens that the intra-domain link is congested.

G: However, transparency comes at the cost of anonymity. In particular, popular anonymity networks like Tor fundamentally rely on limited Internet performance transparency in order to achieve their anonymity goals, and this is because, at a high level, anonymity relies on hiding the communication patterns of users, while transparency brings them to light.
G: This is what we call a flow. For example, here Alice talks to Bob by sending him a certain number of packets during the first time unit, then fewer packets during the second time unit, and continues like that. There are these three relay nodes in the middle, or equivalently this onion, that provide the following property: an adversary sitting at either position one or position two cannot tell that Alice is talking to Bob.

G: However, prior work shows that, given enough location diversity, the adversary can de-anonymize traffic. In this example, if the adversary is present at both locations 1 and 2, she can potentially tell with high confidence that Alice is talking to Bob. This is true if she sees independent, downsampled versions of the actual packet flow at the two locations, but it is also true if she does not observe the single flow from Alice to Bob, but instead an aggregate which is a composite of flows, the red and green one here.
G: The good news is that in today's Internet, which operates in the absence of any transparency system, it is still reasonable to assume that such a global passive adversary is rare. Let us take a step back: Tor is an overlay over the network of ISPs that make up the Internet, and we argued that we need more Internet performance transparency.

G: So let us explain what a transparency system is, and what its implications on anonymity are. Building transparency into the Internet fabric boils down to networks like ISPs publishing information about the traffic they observe at their boundaries, and then Internet users, regulators and the networks themselves using this public information to analyze each other's performance.
G: In particular, if an ISP joins such a transparency system, it would have to deploy special witness logic where traffic enters and exits its domain; you can see these as the orange nodes on the slide. Witnesses observe traffic and periodically send packet reports on traffic aggregates to a logically centralized entity we call the ledger. Having all these reports, the ledger, and anyone who has access to it, can trace problematic behavior to specific areas and check for compliance with service-level agreements and neutrality.

G: Let's assume that this flow is truly contained in the red and green aggregate. Now suppose that, concurrently with this aggregate, the transparency system publishes two more aggregates, the black ones on the slide. Then the adversary's goal reduces to determining which of these aggregates is more likely to contain the target flow. The adversary has the following information.
G: So far, from what I have told you, it seems that the traditional anonymity set size is the metric we are after. This is true, except for when the adversary has misleading information: for example, if Eve is pretty sure that an incorrect aggregate includes the target flow, for example the bottom one on the slide.

G: In this case, the traditional anonymity set size would give us low anonymity, the red point on the slide, but we argue anonymity should be evaluated as high, the blue point on the slide, because in this case misleading information is as good as no information. Our metric, a refined anonymity set size, addresses all this; you can find more information about it in our paper.
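To make the adversary's matching step concrete, here is one simple way candidate aggregates could be scored against the target flow's packet-count time series. This is an illustrative correlation-based sketch under my own assumptions; it is not the attack model or the metric defined in the paper:

```python
from statistics import correlation  # available in Python 3.10+

def rank_aggregates(target_flow_counts, candidate_aggregates):
    """Rank published aggregates by how well the target flow's pattern fits them.

    `target_flow_counts` is the per-time-unit packet count of the observed flow;
    each candidate is a same-length series of published aggregate packet counts.
    Higher correlation = more plausible container for the flow.
    """
    scores = {name: correlation(target_flow_counts, counts)
              for name, counts in candidate_aggregates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

flow = [50, 20, 35, 5, 40]
aggregates = {
    "red+green": [80, 45, 70, 30, 72],   # contains the flow: its pattern shows through
    "black-1":   [60, 62, 58, 61, 59],   # flat, unrelated traffic
}
print(rank_aggregates(flow, aggregates))
```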
G: An anonymity set size of one is the best-case scenario for the attacker, who successfully traces the target flow to the correct aggregate. On the other hand, an anonymity set size of 50 reflects the worst-case scenario for the attacker, because she thinks that all 50 aggregates are equally likely to contain the target flow. As a result, the attacker would like the CDF of the anonymity set size to lie towards the left-hand side, while we aim for the opposite. The orange curve shows the CDF of the anonymity set size for the case of a one-second observation time interval; this is simply the time window during which the adversary knows the pattern of the target flow and all candidate aggregates. As you can see, for the case of one second things are pretty bad for the adversary, because only a few flows have a fairly low anonymity set size.
G
However,
things
change
as
we
increase
the
observation
time
interval
from
1,
second
to
10,
seconds,
1,
minute
and
finally
10
minutes,
because
now
we
can
see
that
the
adversary
can
more
and
more
reliably
trace
flows,
in
particular
for
the
case
of
10
minutes,
which
corresponds
to
the
green
curve.
Here
there
are
about
20
percent
of
the
flows
that
are
uniquely
identifiable.
G
G
G
G
We've
seen
that
if
we
add
performance
transparency
to
the
internet
and
without
thinking
much
about
it,
there
is
indeed
the
damage
on
the
limit
to
alleviate
these
damage.
We
propose
a
tie
speech
course
in
the
timeline
of
the
package
reports
before
publishing
them
ledger
intuitively.
What
that
means
is
that
corset
time,
regularity,
better
hides
the
flow
patterns,
because
it
results
in
the
loss
of
fine
grained
information
that
would
enable
a
good
pattern
matching
between
flows
and
aggregates.
G
However,
it
is
the
very
same
loss
of
detail
that
impacts
report
utility
because
the
ledger
may
now
not
be
able
to
precisely
localize
in
time.
Certain
events,
for
example,
bears
of
packet
loss
stay
compared
to
additive
noise
methods
like
differential
privacy
is
the
advantage
of
our
method
lies
on
the
relayability
of
the
reports.
It
produces
packet
loss
rates
computed
from
accurate,
yet
coarser
reports
will
also
be
perfectly
accurate,
albeit
average
over
longer
time
internet.
G
G
Now,
on
the
lower
part,
we
see
the
output
of
the
algorithm.
We
say
again
the
same
aggregate,
namely
packet,
counts
over
time,
but
now
observe
that
the
granularity
for
some
of
the
reports
is
coarser
than
a
single
time
unit,
for
example,
for
the
first
packet.
Can
we
see
that
it
refers
to
both
the
first
and
the
second
time
when
merged
the
same
happens
to
the
next
packet
count,
but
then
you
can
see
that
for
the
last
packet
count
it
refers
to
a
single
time
unit.
G
Given
the
course's
granularity,
we
would
be
willing
to
tolerate,
say
two
time
units
the
next
thing
to
do
would
be
uniform
being
and
the
uniform
billing
statically
merge
packet
counts
every
two
time
units.
However,
the
static
beaming
might
not
always
be
the
one
that
hides
the
most
of
low
patterns,
in
which
case
it
also
introduces
unnecessary
noise.
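For reference, the uniform-binning baseline she contrasts with is just a static merge of consecutive packet counts; a minimal sketch (the adaptive algorithm in the paper chooses the merge points instead of fixing them):

```python
def uniform_binning(packet_counts, bin_width):
    """Statically merge per-time-unit packet counts into bins of `bin_width` units.

    Totals are preserved exactly, so loss rates computed from the coarser report
    stay accurate; only the time resolution is reduced.
    """
    return [sum(packet_counts[i:i + bin_width])
            for i in range(0, len(packet_counts), bin_width)]

print(uniform_binning([12, 7, 30, 4, 9, 16], bin_width=2))  # [19, 34, 25]
```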
G: You can think of this flow as a virtual flow, namely a composite of real flows, which has the property that it leaks the most across real flows for a given binning. To better explain what I mean, let's go back to the previous example. On the right we see again packet counts over time for an aggregate that consists of the red and green flows; on the left we see the leakage for each of these flows.

G: Now that I have presented our solution, it is time to evaluate how it improves flow anonymity. The experimental setup stays the same as before; the focus is on the 10-minute observation case, which proved to be challenging before, because the attacker could de-anonymize a large number of flows.
G: Finally, you can see how our algorithm scales, meaning that it is deployable. To conclude: if not carefully thought through, transparency can greatly damage anonymity. We also find that adding noise to the traffic reports, so as to ensure differential privacy, would make the reports unusable in the context of a transparency system.

G: However, this time the focus is not on users but on the networks themselves, specifically networks that are interested in hiding their topology, because it is possible to reverse-engineer a topology given the inter-network paths trivially exposed by transparency systems. And with that I would like to thank you, and I'm now ready to take any questions that you may have.
A: Okay, thank you; another excellent talk, thank you very much. I see there's a bunch of discussion in the chat about multipath, and I see Jonathan has joined the queue; I guess you want to follow up on that.
E: I actually was going to ask a different question: do you consider in your model an attacker who might vary inter-packet times? Like, they might try to delay some packets and not delay other packets, to try to increase the signal.
G: Not with the flow itself; what the adversary can do is just observe the packet flow, somehow get access to what it looks like, and then try to find which of the aggregates published at the last hop, for example here, is more likely to contain it. That is the scenario we consider, so we don't consider more sophisticated attacks where the adversary can also play around and be active with respect to how she attacks the system.
A
So
I
had
a
question
about
the
the
the
complexity
of
the
adaptive
binning.
Does
this
get
expensive
to
compute
with
very
large
aggregates,
or
is
it
still
manageable.
G: Yes, so the complexity depends on the number of aggregates, but not on the number of aggregates published overall in the Internet, because this is local to the ISP. So yes, it depends on the number of aggregates per ISP and per time interval. But to reduce the complexity and make it run in reasonable time, what we do is split time into intervals and run the algorithm independently for each time interval, because that is the parameter that mostly affects the cost of the algorithm, not the number of aggregates.
A: Okay, thanks. And my other question was: how sensitive is this to a correct implementation of the adaptive binning? Does it degrade gracefully if people do this slightly incorrectly, or does it fail catastrophically?

A: Yeah, or, I guess, if the binning isn't quite implemented right, is it really sensitive to getting the precise binning correct, or is it...?
G: To be honest, not so much. There are benefits to using this algorithm, but, for example, we consider a variant where the ISP would simply statically merge every, let's say, 10 time units, and there are still benefits to doing that compared to having nothing deployed at all. So it's not that crucial, but coarsening the time granularity is crucial, yes.

G: So, Jonathan, I think you're asking, I think, whether it would be better to use game theory, or... all the details?
E: Yeah, so I was just thinking: because I assume you also have to measure traffic between ISPs, you could somehow have it that ISPs would also talk about what happens between ASes. That's the right grouping. But basically the idea would be that one ISP lying makes other ISPs look worse, or better, presumably worse, and so you could have it such that the ISPs would be honest because they're worried about their competitors making them look bad. It just sort of has this prisoner's-dilemma style: we all have to work together or else we'll all look bad.
G: Yes; actually it's really great that you brought that up, because it's something that I'm currently working on. Our current work is exactly on this honesty piece, but it is a bit orthogonal to this paper. The talk I gave today was about: let's assume ISPs are honest, let's make this assumption, given that it is in their benefit to protect the privacy of their users, and then ask how they would report these packets, how they report in order to hide who is talking to whom. However, what you mentioned is really a great topic: why would the ISPs ever be honest with us with respect to what they do with packets? Because, if I am an ISP, I would rather say that I'm doing fine, I don't lose any packets and I'm not delaying any packets, and blame my neighbors.
G
However,
in
our
current
work
we
actually
show
that,
yes,
we
model
that
with
game
theory,
we
are
completely
right
and
we
show
that
there
are
incent,
given
that
users
have
care
about
certain
metrics
like
packet
loss,
for
example,
the
isps
we
form
again
theoretical
against
a
game,
a
theoretical
game
where
I
spaced.
G
G
For
example,
if
you
think
about
fatigue
loss,
let's
say
we
have
two
only
isps
in
the
internet,
we
know
ntns.
We
assume
that
the
ends
know
how
much
packets,
how
many
packets
are
lost.
Then
it
has
to
be
that
either
the
first
or
the
second
is
below
the
packets.
So
in
that
case
it
couldn't
be
that
nobody
lost
them.
So
if
I
yeah,
if
I
have
the
first
isp
and
I'm
losing
the
packets,
then
if
I
claim
that
I
do
not
it's
as
if
implicitly
I'm
claimed,
I'm
blaming
my
neighbor.
G: So, yes, to add to this: our ongoing work explores what kind of metrics we can incentivize the ISPs to report honestly, and we can show that at least mean packet loss, and also mean packet delay and the variance of delay, are metrics we can hope to expose in a trustworthy way. Thank you for the question.
A: Okay, thank you. I don't see anyone else in the queue, so thank you again, great talk, and I guess we'll now move on to the final talk, which is Ranysha, who will be talking about her paper "Beyond Jain's Fairness Index: Setting the Bar for Deployment of Congestion Control Algorithms". Ranysha Ware is a fourth-year PhD student in computer science at Carnegie Mellon University.
H: Hello. So today I'm really excited to talk about this joint work with Matt, who's now at Nefeli, and Justine and Srini, who are my advisors at Carnegie Mellon; my name is Ranysha Ware.

H: Today I'm going to talk to you about this work on congestion control deployment. Let me start by motivating why we care about this and why you should too.
H: Well, typically we try to say a new algorithm, Taco, is reasonable to deploy on the Internet if it fairly shares with widely deployed legacy algorithms, Pear for example. Here are a bunch of graphs and tables all trying to use fairness to make the argument that their algorithm won't be too aggressive towards the status quo, which today is typically regarded to be Cubic. But here's the thing: everyone falls short of actually achieving fair outcomes, yet everyone still has to try to make these arguments that their algorithm is deployable.

H: PCC Vivace also showed it could be unfair to Cubic, but argued that as the number of Cubic senders increases, it achieved the best fairness among new-generation algorithms; and Copa makes a similar argument: that it's much more fair than BBR and PCC, and it uses bandwidth that Cubic wasn't going to use anyway.
H: So we've already seen that a threshold based on fairness just is not practical, right, because no one can actually achieve fair outcomes. Therefore, a good deployment threshold needs to be practical, in that it should actually be feasible in practice for a new congestion control algorithm to meet the threshold. That's what we mean by practical. Next, we say a deployment threshold needs to be multi-metric.

H: Let's illustrate what we mean by multi-metric through an example. Consider the scenario where, let's say, Beyoncé is at home; she's trying to talk to her daughter Blue Ivy over Skype, over her home Wi-Fi access link, and let's say her access to the Internet is really slow, so her access link is a slow bottleneck link here. And, sorry, let's say it's using some old legacy congestion control algorithm, Pear. Now, let's say Bey's connection using Pear is able to achieve about five megabits per second of throughput and a good low latency; this is when it's not sharing the bottleneck, it's just alone. But now let's say her husband Jay-Z comes home and he wants to download the Windows operating system for some reason, so this is going to be a large, long-running download, and the server he's downloading from is using some brand-new congestion control algorithm, Taco.
H: Therefore, it's important that a deployment threshold be able to consider a variety of metrics beyond just throughput. Metrics beyond throughput are becoming increasingly important in the Internet today, when we have more and more applications that care about things like latency or jitter or loss rate; however, we can never really talk about anything other than throughput when we're stuck talking about fairness.

H: So that's what we mean by multi-metric. Now let's walk through an example to illustrate what we mean by status-quo biased. Let's say Beyoncé is downloading the Linux operating system from a server using some popular legacy algorithm, Pear. This algorithm, let's say, works well in Wi-Fi networks, so her download is just using all of the available bandwidth.
H: So now let's say again Beyoncé is trying to download this large file, and the server uses some legacy algorithm, Pear, but now let's say Pear really sucks at fully utilizing all of the available bandwidth in a Wi-Fi network, so she's only getting a download speed of three megabits per second.

H: Now Jay-Z comes along, trying to download that Windows OS, still using some fancy new algorithm, Taco. Let's say Taco is much better at utilizing the available bandwidth in this Wi-Fi network, and it's able to do that without hurting Bey's connection: it's able to just use the rest of the available bandwidth and get seven megabits per second.
H: So a good deployment threshold should not penalize Taco when Pear already has inherently poor performance. This is what we mean by demand-aware. For example, max-min fairness, which accounts for the demand of a flow, is demand-aware, but equal-rate fairness, which just says each flow should get the same rate, is not. Lastly, we say a deployment threshold needs to be future-proof, and by future-proof we just mean that a good deployment threshold should be useful on a future Internet where none of today's congestion control algorithms are deployed.

H: We care about this property because many discussions around new TCPs consider something called TCP-friendliness. TCP-friendliness focuses on behaving just like Reno, to be fair to Reno, even though very few senders on the Internet probably even still use Reno anymore. So I'm going to give you a silly little toy example that explains why it's really silly for us to keep binding ourselves to Reno. Consider this example where, in the past, Skype and a bunch of other services used some algorithm, call it Tomato, but let's say Tomato was terribly inefficient.
H: So today Skype, along with many other services, switched to using a much better algorithm, and they're now using Pear. Now, when Taco comes along, does Taco need to be nice to Pear and Tomato, or just Pear? Well, if no one uses Tomato anymore, new algorithms only need to be nice to Pear, whatever the current status quo is.

H: This is what we mean by future-proof: a future-proof threshold would only require Taco to be nice to Pear, whatever the current status quo is. I say all this to make the point that traditional notions of TCP-friendliness are just not future-proof: in a future where no one uses Reno at all, we should not be relying on thresholds bound to Reno or any other particular algorithm's behavior.
H: So, when we're trying to show deployability, we typically run experiments where we have Pear, whatever the current status quo is, versus Taco, our new algorithm, and we want to measure performance. If we care about something like throughput, that might look like this: we have Pear's performance, we have Taco's performance, and fairness would say, compare these two bars; you want to say whether this outcome is fair or not. But remember, because of status quo bias, we don't actually care what happens to Taco's performance when the two compete.

H: We only care about what happens to Pear's performance. In particular, if this was Pear's performance alone, we care about how Pear's performance has changed now that it's competing with Taco; so we're comparing this red and green bar.
H: So again, here Bey is trying to do her video conference, and let's say that Pear is able to use all of the available bandwidth and gets low latency when it's alone. But now, when Jay-Z comes along with his Taco flow, Beyoncé's throughput goes down and her latency goes up, so here we would say that Jay's connection has harmed Beyoncé's. So let's be a bit more formal about our definition of harm. Harm is measured between zero and one, like Jain's fairness index, where zero is harmless and one is maximally harmful. Our harm function takes two inputs. The first is x, where x is Pear's solo performance, so this is Pear's demand; and it takes y, where y is Pear's performance when it's competing with Taco. Thus, to compute harm for more-is-better metrics like throughput, harm is (x - y) / x, and for less-is-better metrics like latency, harm is (y - x) / y.
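As a sketch of those two formulas exactly as stated (the example values are illustrative units chosen to reproduce the 0.5 and 0.95 harms quoted next; the exact slide numbers are not in the transcript):

```python
def harm(solo, competing, more_is_better=True):
    """Harm to the legacy flow: 0 = harmless, 1 = maximally harmful.

    solo (x) is the legacy algorithm's performance when running alone, i.e. its
    demand; competing (y) is its performance when the new algorithm shares the
    bottleneck with it.
    """
    x, y = solo, competing
    if more_is_better:              # e.g. throughput: harm = (x - y) / x
        return (x - y) / x
    return (y - x) / y              # e.g. latency or loss rate: harm = (y - x) / y

print(harm(10, 5, more_is_better=True))       # 0.5  throughput harm (10 -> 5)
print(harm(20, 400, more_is_better=False))    # 0.95 latency harm (20 -> 400)
```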
H: So in this example, Taco causes 0.5 throughput harm to Pear, and Taco causes 0.95 latency harm to Pear. You should be able to see that harm is multi-metric, right: I can compute throughput harm, latency harm, jitter harm, loss-rate harm, whatever other kind of metric I want. It's also status-quo biased: the two numbers in our harm calculation are based only on Pear's performance; I don't care what Taco's performance is here. And it's also demand-aware.

H: So for this we need a harm-based deployment threshold, and the key proposal in our work is that a harm-based threshold should be the following: Taco should not harm Pear much more than Pear harms itself. That is, any new CCA shouldn't harm the status quo more than it harms itself. So what do we mean by "much more than"? Well, so far we've computed the harm that Taco does to Pear, and now we want to compare that with the harm Pear does to itself.
H: In the paper we discuss three possible ways to compare these, but today I just want to discuss one possible threshold, which we call equivalent bounded harm. Equivalent bounded harm, just like its name says, means these two things should be equal: the harm Taco does to Pear should be equal to the harm Pear does to itself.
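A minimal sketch of that test; the threshold is the one named in the talk, the numbers are the same illustrative ones as above, and the "much more than" variants from the paper are not shown:

```python
def passes_equivalent_bounded_harm(harm_new_vs_legacy, harm_legacy_vs_legacy):
    """Equivalent bounded harm: the new CCA may not harm the incumbent more than
    the incumbent harms itself when competing with another copy of itself."""
    return harm_new_vs_legacy <= harm_legacy_vs_legacy

# Throughput example (illustrative numbers): Pear alone gets 10 Mbit/s,
# 5 Mbit/s against Taco, and 5 Mbit/s against a second Pear flow.
harm_taco_on_pear = (10 - 5) / 10   # 0.5
harm_pear_on_pear = (10 - 5) / 10   # 0.5
print(passes_equivalent_bounded_harm(harm_taco_on_pear, harm_pear_on_pear))  # True
```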
H: So let's go back to our harm calculations to compute this. Before, this was the harm that Taco did to Pear, and now we want to add in the harm that Pear does to Pear. Here you should see that the throughput harm is the same, because Pear gets the same throughput when it competes with Taco and when it competes with itself, so this is fine under equivalent bounded harm.

H: So a harm-based threshold is practical: we can already see that Pear can achieve certain performance when it's competing, in practice, with Pear, so it should also be possible for Taco, in practice, to get similar performance outcomes; certainly, if Pear can do it, then so can Taco. And it's future-proof, right: there's no baggage here, there's no Tomato here that I'm trying to behave like; the only thing that matters is what the status quo is and how it competes with itself.
H
Well,
it
certainly
meets
all
of
our
criteria,
as
we
previously
discussed,
while
alternative
thresholds
based
on
fairness
or
thcp
friendliness,
certainly
do
not
so
it's
better
than
what
we
had
before,
but
we
do
have
some
concerns
about
equivalent
bounded
harm.
So
let
me
show
you
a
little
example,
so
here
I'm
illustrating
an
issue
with
equivalent
harm
where,
let's
say,
when
pair
competes
with
pair
for
two
long
running
downloads,
one
flow
gets
seven
megabits
per
second
and
the
other
only
gets
three.
H
So
there's
significant
imbalance
here
when
pair
competes
with
itself
and
under
equivalent
bounded
harm.
Any
new
taco
algorithm
couldn't
improve
this
imbalance,
which
seems
problematic.
So
this
is
a
hotness
paper
and
it's
an
open
question.
What's
exactly
the
right
harm-based
threshold
and
in
the
paper
we
define
two
other
thresholds
that
allow
taco
to
take
a
little
bit
more
bandwidth.
H
We're
not
sure
what
exactly
is
the
right
threshold
and
there
are
other
open
questions
in
the
paper.
For
example,
when
you
run
performance
experiments,
there
are,
of
course
going
to
be
a
distribution
of
these
results,
so
should
we
care
about
the
average
worst
case
results
or
something
else
also,
we
can't
possibly
measure
the
performance
of
pair
versus
taco
in
every
possible
scenario.
H
As
we
add,
so
we
ask
what
are
the
right
workloads
and
networks
we
want
to
test
for
deployability,
and
even
if
we
have
a
threshold,
can
we
really
even
enforce
it?
So
while
we
haven't
exactly
settled
on
the
perfect
threshold,
here's
what
we
do
believe
fairness
is
wrong.
It's
not
working
as
a
practical
threshold,
and
thus
we
need
to
stop
making
excuses
for
why
our
new
algorithms
are
not
reading
an
unrealistic
goal
like
ferris
and
lastly,
reasoning
about
harm
is
the
right
way
forward
and
it's
going
to
give
algorithm
designers
a
much
more
realistic
goal.
H: So that concludes this talk. Please look at the description box for a link to the paper, and if you have any questions, please shoot me an email or ask me a question on Twitter. Thanks for listening.
A: Surely someone must have a question... Well, while we're waiting for that, I mean... Brian, go ahead.

A: Okay, I was going to ask about... I mean, you mentioned some limitations, some other open issues. I was just going to ask if you've done any more work since, I mean, this is a paper from a year ago, if you've done any more work to address those open issues, and if you'd like to say anything about that.
H: So I actually... we have a project where we're trying to actually apply, like, use a harm-based threshold to show whether or not applications we see in the wild are fair, and while doing that work we've tried to refine what we mean by, like, whether equivalent bounded harm is the correct thing, or, in the paper, there are other ones: symmetric bounded harm, worst-case bounded harm.

H: So I guess my work's kind of focused on how do I actually use this harm-based threshold to say something in practice. Yeah, that's sort of the great thing to do, yeah.
I: Hi, Brian Trammell. I'm not going to ask you any difficult questions, because it's like, what is it, four in the morning there? Which is... 5:30? Okay, that's still terrible. Thank you very much for this talk. Looking at your last slides, I think there's been wide consensus that fairness isn't cutting it in the community for a very long time, but that just went into complaints, and this is actually sort of, okay, we're going to address the complaints. I would, not my research group, but I would invite you to try and find a way to come to the Internet Congestion Control Research Group at IETF 110, and I would hope that we could have sort of a long, open discussion about how to actually use harm as a metric to talk about measurement of some of the work that's being done there. I think that's a really good room to actually expand the applicability of this metric. I need to think about, you know, some of the questions about setting thresholds here, and exactly how you would take this dimensionless metric and compare it to other things, sort of the benefit side, because now we have the cost side and we can talk about the benefit side, and that actually kind of changes the ability to reason about this in a multi-dimensional space, which is super exciting, and I'd really like to see that followed up on in ICCRG at 110, which will be in March, online, in a European time zone, so still not great for Pittsburgh, but much less bad than this time zone. So hopefully we can make that happen; I look forward to talking to you there.
J: Hi, thank you for this talk. I would put this as my favorite talk in the last decade, at least. ("Wow, thank you.") So I would be very, very interested to see a sort of example walk-through, and I think we have a couple of very relevant natural experiments from recent history. One of them being, you know, I would just love to see how this is quantified: the harm of Cubic to Reno.

J: That is a thing that actually sort of happened; and also the harm of Reno to Vegas, for example. If we think about, you know, sort of the early history, when congestion collapse was the urgent matter that drove the deployment of Reno, and Vegas happened within that year, but kind of got...
J: You know, we know from some of the early Vegas papers that it didn't get deployed because it loses so badly to Reno, right? So, in a sort of counterfactual world where we had managed to land on Vegas, which is better for everyone, but tried to deploy Reno, what would that harm have looked like? Because, you know, having worked with some congestion control...

J: ...in practice, on the Internet, dealing with the loss-based congestion control approach while trying to deploy a delay-based one: I was working with FAST TCP, and FAST, for some years before we were acquired, and yeah, it would be nice to know, since we felt this impact constantly, a quantifiable value for how bad it was that we adopted Reno.

J: I would be very interested to see that analysis, regardless of how useful it is going forward, just as an example of how to apply this research. Thank you.
F: That was a... it was a great talk, and everybody's saying it, but it really was. My work these days is on quantum networking, and we're doing some simulation, and one of our big tasks at hand is actually comparing multiplexing schemes: we're looking to compare time-division multiplexing versus stochastic multiplexing and buffer-memory-space multiplexing, and things like that. Can your measures be used for comparing multiplexing schemes, or OS scheduling schemes, as well as congestion control?
H: I don't see why not. I mean, all you really need to have is a scenario where you have some status quo and you want to compare whether this new thing harms the status quo. So as long as you have...
F
...that, then yeah. In the case of multiplexing, presumably, when we're comparing schemes like TDM versus statistical multiplexing, it's going to be more a matter of comparing one simulated network scenario using one scheme against the same network simulated using the other, so it would be two different scenarios rather than competing in the same network, most likely.
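Since harm only needs a baseline measurement and a comparison measurement for the same "victim", it can in principle be applied across two separate simulation runs rather than within one shared network, as suggested above. A rough sketch under that assumption, reusing the harm() helper from the earlier sketch; the application names and values are placeholders, not real results.

```python
# Per-application performance from two runs of the same simulated network:
# once under the status-quo multiplexing scheme, once under the candidate.
# These values are placeholders for whatever metric the simulation reports.
baseline  = {"app_a": 0.92, "app_b": 0.88}
candidate = {"app_a": 0.90, "app_b": 0.95}

per_app_harm = {
    app: harm(baseline[app], candidate[app])  # harm() as sketched earlier
    for app in baseline
}
print(per_app_harm)  # app_a suffers a little harm; app_b none (it improved)
```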
B
Yeah, so my colleague Pete Heist and I are already working on a test suite to help with our congestion control development process, and we are planning to incorporate this harm metric into it now that we know about it; it makes a heck of a lot of sense.
A
Yeah, I mean, there are a bunch of people building test harnesses. It would be great if there were some way of integrating these ideas into those, so we could automate some of these metrics.
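The automation can be as thin as a wrapper that runs the status-quo workload twice, once alone and once against the candidate, and reports harm per metric. A hedged sketch: run_scenario() is a hypothetical hook into whatever harness already exists, not a real API, and harm() is the helper sketched earlier.

```python
def evaluate_harm(run_scenario, status_quo, candidate, metrics):
    """Assumes run_scenario(list_of_flow_names) returns
    {flow_name: {metric_name: value}}; this wrapper only adds the harm step."""
    solo  = run_scenario([status_quo])             # status quo running alone
    mixed = run_scenario([status_quo, candidate])  # status quo vs. the candidate
    return {
        metric: harm(solo[status_quo][metric],
                     mixed[status_quo][metric],
                     more_is_better)
        for metric, more_is_better in metrics.items()
    }

# e.g. evaluate_harm(run_scenario, "reno", "new_cc",
#                    {"goodput": True, "p95_latency": False})
```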
K
Hi. I wonder if you've thought of the situation where, to give it a numerical example, we talk in terms of bytes instead of bits per second. Let's say a 100 gigabyte flow starts at the same time as, say, a 10 megabyte flow, and then 10 seconds later another 10 megabyte flow starts, and then 10 seconds later another 10 megabyte flow starts, and so on.
K
So you've only got two of the flows running at any one time, but you've got these reasonably small, though still sizeable, 10 megabyte flows running against this much longer flow. So overall, all the 10 megabyte flows...
K
If they all got equal bit rate sequentially, then by your harm metric they would be perfectly fair, or there would be equal amounts of harm. But what if the larger flow, which goes on longer and hits all the other flows, were to go slower every time one of the other flows was there, and the other, shorter flows were to go faster?
H
Okay,
so
my
my
definition
for
harm
and
everything
says
you
can
use
whatever
metric
you
want,
it
doesn't
have
to
be.
K
Yeah, and the problem is that people are measuring this with bits per second. That's the problem in the forum, in the IETF: people don't realize that there's another dimension here, which is time. I'm not saying it's not useful to have your harm metric; what I'm saying is that we also have to be careful which actual units we put into the dimensionless equation.
K
That's the problem anyway. Yeah, it has to be...
K
About
this,
but
just
just
to
explain
to
people
it's
not
just
equal
bit
rate
or
you
know
it's
not
even
equal
equal
harm.
If
you're
measuring
bit
rate.
B
So
I
think
that
will
need
to
be
incorporated
into
various
test
suites
so
that
these
different
metrics
of
harm
are
automatically
generated,
so
they
can
be
evaluated
all
at
once.
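Generating several harm figures side by side makes the unit question above explicit rather than baked in. A small, purely illustrative sketch along those lines, again reusing the harm() helper: the numbers describe a hypothetical 10 megabyte flow that finishes in 10 s alone (8 Mbps), gets 4 Mbps during the part of its life that overlaps the long flow, and finishes in 12 s.

```python
# Two views of the same hypothetical run, alone vs. competing with the long flow.
observations = {
    # metric name: (solo, competing, more_is_better)
    "rate_during_overlap_mbps": (8.0, 4.0, True),
    "completion_time_s":        (10.0, 12.0, False),
}

for metric, (solo, competing, more_is_better) in observations.items():
    print(metric, round(harm(solo, competing, more_is_better), 2))
# rate_during_overlap_mbps 0.5
# completion_time_s 0.17
# Same run, different unit, very different harm figure -- which is why test
# suites that report several metrics at once are useful.
```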
K
This is why it's important to measure the congestion that they all cause, and use that as the metric.
K
This requires a deeper conversation than we can have here, but yeah. Thank you.
A
Thank you very much. It sounds like we should move this to the ICCRG list or somewhere.
A
So I think the harm metric can cover those cases; it just needs to be applied correctly. Sylvester?
L
Hello, thank you for the excellent paper and talk. I really liked your paper when I read it last year. Did you do any kind of comparison, for example between Cubic and BBR, or between any two congestion controls? Because I still have the feeling that the harm-based bar is a much better bar than, for example, Jain's fairness, but it's still close to impossible to meet it and innovate at the same time.
L
Do
you
test
your
your
home-based
threshold
with
actual
condition
controls
over
the
internet
and
did
you
find
that
that,
for
example,
a
new
like
cubic
is
is
really
fair
to
reno
or
not,
or
whether
bbr
is
fair
to
cubic
or
not,
because
I
still
have
the
feeling
that
for
every
new
congestion,
control
and
old
condition
control
power,
you
can
easily
find
the
scenarios
when
this
bar
is
not
met.
H
Yeah, so we're actually working on that now, and we found it depends on the application. We've looked at things like long-running downloads, but also video, and compared...
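For context, the deployment threshold being probed here can be phrased as: the harm the candidate inflicts on a status-quo flow should be no worse than the harm status-quo flows already inflict on each other. A minimal sketch of that check, assuming the three measurements are available and reusing the harm() helper from earlier; the numbers are placeholders, not results from this work.

```python
def acceptable(solo, vs_status_quo, vs_candidate, more_is_better=True, slack=0.0):
    """Deployment check: the candidate should harm a status-quo flow no more
    than another status-quo flow does (optionally within some slack).

    solo          -- status-quo flow's performance with no competitor
    vs_status_quo -- its performance competing against another status-quo flow
    vs_candidate  -- its performance competing against the candidate algorithm
    """
    intra = harm(solo, vs_status_quo, more_is_better)  # status quo vs. itself
    inter = harm(solo, vs_candidate, more_is_better)   # candidate vs. status quo
    return inter <= intra + slack

# Placeholder example: a Reno-like flow gets 50 Mbps alone, 25 Mbps against
# another Reno-like flow, and 18 Mbps against the candidate -> bar not met.
print(acceptable(50.0, 25.0, 18.0))  # False
```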
A
Excellent. Jonah, you get the last word.
M
Thank you for presenting this work here. I have long wanted this work to show up in one of the IRTF venues, and I'm glad that it showed up here. I want to see if I can try to articulate what Bob was trying to say, because I think what Bob was trying to say is actually quite important.
M
On
the
one
hand,
I
think
that
what
what
work
is
doing
is
challenging
people
to
move
away
from
notions
of
fairness,
notions
of
thinking
about
flow
fairness,
flow
level
or
bitrate
level.
Fairness.
On
the
other
hand,
there
is
a
risk
and
the
risk
is
that
now
people
get
attached
to
what
I
would
think
of
as
a
similarly
constraining
or
ossifying
notion
of
flow
level
harm.
M
So
if
one
simply
thinks
of
bit
rates,
then
you
effectively
end
up
with
the
dual
of
the
the
previous
problem
right,
you're,
simply
thinking
about
harm
now,
but
in
a
very
sort
of
micro.
M
Benchmarky
way
still
so,
the
fact
that
you
moved
away
from
fairness
is
helpful,
but
the
problem
is
that
one
gets
still
stuck
to
being
with
micro
benchmarks,
but
I
know
that
that's
not
what
you're
thinking,
but
I
think
that
there's
a
risk
in
simply
having
a
metric
and
thinking
that,
now
that
we
have
this,
this
is
the
one
true
way
of
measuring
condition,
control,
usefulness.
M
Ultimately, what matters is the metric that applications use to determine what usefulness, or useful work, means for them, and that tends to be quite difficult, and it tends to keep moving. Today YouTube has a certain way of doing things and therefore certain metrics become important; tomorrow it's something else.
M
It's
gaming
or
it's
I
don't
know
it's
twitter
or
it's
tick,
tock
or
whatever
it
is,
and
they
have
different
metrics
and
as
network
use
changes,
we
will
find
that
there
are
metrics
that
get
more
or
less
harmed
for
the
same
set
of
congestion
controllers
and
that's
something
that
we
have
to
keep
in
mind,
and
I
think
that
we
want
to
be
mindful
of
not
we.
We
want
to
be
mindful
that
we
can't
find
a
silver
bullet
here.
M
There
isn't
one
really,
but
there
is
a
process
that
one
can
think
about
how
to
evaluate
a
congestion
controller
in
a
better
than
we've
done
in
the
past,
for
a
set
of
circumstances
that
happen
to
be
true
today,
and
we
might
want
to
evaluate
this
in
the
future
because
as
circumstances
change
as
application
workloads
change,
the
notion
of
harm
from
an
application's
point
of
view
will
change,
and
I
think
that's
important
to
keep
in
mind
as
you
as
you
work
on
this.
I
just
encourage
you
to
think
about
it.
That
way
as
well.
A
Okay, thank you, everybody. Thank you again to all the speakers, and congratulations to all the prize winners. I think this has been an absolutely great session, so thank you and congratulations again. I'd like to remind everyone that the deadline for next year's ANRP nominations is this Sunday.
A
So
if,
if
you
want
to
get
talks
this
good
next
year,
please
nominate
some
good
work
and
go
to
the
iitf.org,
a
rp
site
and
nominate
the
work
and
with
that
that's
everything
we
have
for
today.
Thank
you,
everybody
and
I
hope
to
see
you
in
person
eventually
but
online
in
march.