From YouTube: ANRW-TimeFairnessAndNeighbors
Description
TIMEFAIRNESSANDNEIGHBORS meeting session at ANRW
B: So, hello everybody. I'm Thomas Scheffler, I'm here from Berlin, Germany, and I'm going to present some student work which we did in collaboration with the University of Potsdam. What we were looking at was: how can we secure neighbor discovery in access networks in a different way than what had been proposed previously and what is being implemented or proposed in several other contexts?
B: Well, actually many of the comments I got from the reviews triggered me to go a little bit back in history and talk about what neighbor discovery and ICMPv6 actually are and how they can be exploited, and then I'll focus on what we did. So IPv6 is still kind of the new protocol, 20 years old now, and ICMPv6 is one of the big changes in the protocol compared to what had been there in IPv4; it is now quite different from what we had before.
B: So all these changes led to a fairly different protocol from what we had in IPv4, and this now opens up several possibilities to exploit these kinds of things. Maybe the most obvious change was stateless address auto-configuration, which changed the way IP addresses are assigned to hosts. I mean, we now have DHCP in all the networks, so from the end-user perspective it may not feel so different, but at least the protocols are very, very different.
B: So how does it work? Basically, IPv6 addresses now have something called a lifetime, which may surprise people. You now have tentative addresses, preferred addresses and expired addresses, and when you generate a new address, you first put it into the tentative state, and you need to make sure, through the process of duplicate address detection, that the address is actually unique on the network before you can use it.
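The lifecycle the speaker describes can be sketched as a tiny state machine; this is an illustrative model of the tentative/preferred/expired states and the DAD decision, not code from the talk:

```python
# Sketch of the IPv6 address lifecycle described above (illustrative only).
TENTATIVE, PREFERRED, EXPIRED = "tentative", "preferred", "expired"

class Address:
    def __init__(self, addr, lifetime):
        self.addr = addr
        self.lifetime = lifetime      # seconds the address stays usable
        self.state = TENTATIVE        # new addresses start as tentative

    def dad_result(self, duplicate_detected):
        """Duplicate Address Detection: promote to preferred only if unique."""
        if self.state == TENTATIVE:
            self.state = EXPIRED if duplicate_detected else PREFERRED
        return self.state

a = Address("fe80::1", lifetime=3600)
assert a.state == TENTATIVE
assert a.dad_result(duplicate_detected=False) == PREFERRED
```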
B: So that's how it should work, and that's what you see in the network most of the time, but it's actually possible to attack this kind of protocol, and it's very, very easy to do. You can DoS somebody just by replying to these kinds of messages on the network, and other attacks are possible as well. So how does this attack actually work? Some prankster or attacker simply monitors the network.
B: The attacker watches what's being sent out on the network and just replies to it. What happens then is that your host says: oh sorry, I just got an answer back, I can't use this address, and it may pick another one, depending on the implementation. But then the attacker does the same thing again and again, and actually prevents the host from ever becoming active on the network.
B: So what did people think about it? They said: oh well, we have this nice thing called IPsec, so we just secure our Neighbor Discovery Protocol messages with it, and then the problem is solved. The trouble is this doesn't really work, because in order to set up a security association you somehow have to have a valid IP address, which you don't have, because you are in the process of configuring one.
B: So this would require that you have previously configured manual security associations on your devices in order to join the network, a kind of chicken-and-egg problem. Then people realized we need something else, and they designed the SEcure Neighbor Discovery (SEND) protocol, which is a nice idea; I like it very much. Unfortunately, nobody really implemented it, or even where it was implemented, we found that there are some potential vulnerabilities against it: just by sending arbitrary messages you can actually cause a resource-exhaustion attack.
B: That's probably the point where people said: let's not bother with it anymore, and we don't implement it. So our research group monitored all these activities very closely, over a timeframe starting some 15 years ago, and we actually found, and I don't know if it's known by everybody, that there's actually a good IPv6 attack toolkit out there.
B: So if you ever wanted to play with this yourself and find out what's possible with these protocols and what kinds of attacks are out there, just take a look at that toolkit. It was actually developed by a guy in Berlin as well. I met this guy, and he's a very, very nice guy, but very interested in how to break IPv6.
B
People
came
up
with
all
kinds
of
changes
to
the
protocols
we
now
implemented,
something
like
route
or
advertisement
guards
and
silicon
on
your
access,
switches,
etc.
So
kind
of
the
problem
sent
seemed
to
come
or
gone
out
of
focus
and,
and
nobody
really
cares
anymore.
What
we
did
is
we
in
a
couple
years
ago
we
started
looking
at
ipv6
support
in
IDs
and
specifically
in
snort
and
the
developers
they
said.
B: So we did some implementations there to allow this distinction, and we looked at some of this in order to detect NDP attacks in IDSs, but then found that while it's maybe one way to do it, there may be others: instead of using an IDS, we could use an SDN switch to do very much the same thing. So we could monitor links and what's going on on a link, and potentially this is also cheaper to deploy and maintain.
B: But once we do this, we can go a little bit further and say: oh well, now we can not only monitor but also filter, and we can also put stuff on the network from the SDN controller, which gives us more control over what's actually going on in the network. So we built an NDP proxy, or rather an intelligent, selective proxy: it doesn't indiscriminately proxy messages, it does so only when a certain data model is met.
B: We implemented this in the Ryu framework in Python. What we do is monitor the network, and whenever we see a duplicate address detection message, we take this as the authentication hook for our data model. This generates a host cache entry in our controller, which then allows us to distinguish between what we should forward or drop on our networks, similar to known tools for IPv4. We additionally also implemented Router Advertisement Guard.
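The "DAD message as authentication hook" idea can be sketched as follows; this is a hypothetical, framework-free reconstruction of the host cache logic (class and method names are invented, and the real system runs inside Ryu):

```python
# Hypothetical sketch of the controller-side host cache described above:
# a duplicate-address-detection neighbor solicitation is the hook that
# creates a cache entry binding a MAC address to an IPv6 address.
import time

class HostCache:
    def __init__(self):
        self.entries = {}                       # (mac, ipv6) -> first-seen time

    def on_dad_solicitation(self, mac, target_ipv6):
        key = (mac, target_ipv6)
        if key in self.entries:
            return False                        # already known, nothing to add
        self.entries[key] = time.time()
        return True

    def is_known(self, mac, ipv6):
        return (mac, ipv6) in self.entries

cache = HostCache()
assert cache.on_dad_solicitation("aa:bb:cc:00:00:01", "2001:db8::1")
assert cache.is_known("aa:bb:cc:00:00:01", "2001:db8::1")
```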
B: So our controller is actually the one that sends out Router Advertisements, and if somebody else sends them, we just block that. What we also did: we didn't want every kind of NDP traffic hitting the controller, because this would potentially cause some performance issues. So we managed this by using different priorities for our flow rules. We have a catch-all rule with the lowest priority; for hosts whose MAC address is known, we have flows that take priority over it; and then we filter IPv6 messages.
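The priority scheme can be illustrated with a minimal flow-table lookup; the priority values and rule layout are invented for the example, but the highest-priority-match behavior is how OpenFlow tables work:

```python
# Sketch of the priority scheme described above: a lowest-priority catch-all
# sends NDP traffic to the controller, while higher-priority flow rules for
# known hosts bypass it. (Priority values are illustrative, not from the talk.)
CATCH_ALL_PRIO, KNOWN_HOST_PRIO = 0, 100

def match_rule(rules, src_mac):
    """Pick the highest-priority matching rule, like an OpenFlow table does."""
    matching = [r for r in rules if r["mac"] in (src_mac, "*")]
    return max(matching, key=lambda r: r["priority"])

rules = [
    {"mac": "*", "priority": CATCH_ALL_PRIO, "action": "send_to_controller"},
    {"mac": "aa:bb:cc:00:00:01", "priority": KNOWN_HOST_PRIO, "action": "forward"},
]
assert match_rule(rules, "aa:bb:cc:00:00:01")["action"] == "forward"
assert match_rule(rules, "aa:bb:cc:00:00:99")["action"] == "send_to_controller"
```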
B: We needed the handler to run in a single thread, so we had to make sure it is not going to be called in parallel. So how do we learn the hosts on the network? As I said, we use the typical neighbor solicitation messages: whenever we see somebody using an unspecified source address, we take this as a hint that there might be a new host on the network, and whenever we see a second solicitation for the same address, we block it.
B
This
could
lead
potentially
to
the
case
that
if
somebody
is
moving
from
one
port
to
the
other,
with
his
machine
that
he
himself
blocking
himself,
but
we
we
just
take
this
chance
and
sort
of
in
after
a
certain
computer
world
time
period,
we
actually
drop
these
entries
from
the
cache,
so
it
shouldn't
be
blocked
forever.
It
shouldn't
be
just
blocked
for
a
certain
specific
time
period.
B: Regular traffic should also be able to create these flow entries, and the idea is that when there is some bidirectional communication going on, it runs over the flows. We implemented it such that whenever a new packet arrives it refreshes the timeout period of the flow rules, so they stay active as long as there is active communication between the entities on the network.
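The keep-alive behavior just described is essentially an idle timeout; a minimal sketch (the 30-second value is an invented placeholder, not from the talk):

```python
# Sketch of the idle-timeout behavior described above: a flow entry stays
# alive as long as packets keep arriving within the timeout window.
def flow_expired(last_packet_time, now, idle_timeout=30.0):
    """True once no packet has refreshed the flow for idle_timeout seconds."""
    return (now - last_packet_time) > idle_timeout

assert not flow_expired(last_packet_time=100.0, now=120.0)   # traffic 20s ago
assert flow_expired(last_packet_time=100.0, now=140.0)       # idle for 40s
```

OpenFlow switches implement this natively via the `idle_timeout` field on a flow entry, so the controller does not have to poll.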
B: So the basic functionality of onboarding IPv6 hosts is quite easy to implement and works as intended. The main challenge really is in maintaining this host cache, because we can't guarantee that we see all the packets in the network, and the difficulty in keeping it current is that a host may be shutting down, or going to sleep and not answering our probe packets when we are trying to figure out whether it is still active on the network.
B
We
also
did
test
some
performance
issues
with
this
because
sort
of,
if
basically
now
we
are
in
the
packet
path
and
and
whenever
we
need
to
process
something
then
potentially
take
some
time
and
we
find
out
did
it
take
some
time,
which
is
mainly
due
to
the
fact
that
we
have
a
non
optimized
host
cache
data
structure.
The
trouble
here
is
that
been
in
the
norm
of
economics,
which
can
just
sort
of
key
on
the
mac
address.
B
Here
we
now
have
to
check
megatron's
against
different
and
potentially
multiple
ipv6
addresses,
so
we
have
to
go
through
the
whole
cache
and
this
is
done
in
tight,
and
so
it's
not
yet
production
ready,
but
sort
of
that's,
and
but
these
times
are
only
true
or
these,
these
these
these
times
here
in
milliseconds
or
for
just
first
ICMP
packet
being
processed
by
the
switch
whatever
we
got.
They
got
these
things,
it's
the
flows
installed
and
all
subsequent
like
it
will
be
much
more
faster.
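The linear scan the speaker blames can be avoided with a MAC-keyed index that maps each MAC to the set of IPv6 addresses seen for it; this is one possible fix sketched here for illustration, not the authors' implementation:

```python
# The talk attributes the first-packet latency to scanning the whole host
# cache. An index keyed by MAC address, mapping to the set of IPv6 addresses
# seen for that MAC, makes the membership check O(1) on average.
from collections import defaultdict

class IndexedHostCache:
    def __init__(self):
        self.by_mac = defaultdict(set)          # mac -> {ipv6, ...}

    def add(self, mac, ipv6):
        self.by_mac[mac].add(ipv6)

    def mac_matches(self, mac, ipv6):
        return ipv6 in self.by_mac.get(mac, set())

c = IndexedHostCache()
c.add("aa:bb:cc:00:00:01", "2001:db8::1")
c.add("aa:bb:cc:00:00:01", "fe80::1")           # one MAC, multiple IPv6 addresses
assert c.mac_matches("aa:bb:cc:00:00:01", "fe80::1")
assert not c.mac_matches("aa:bb:cc:00:00:01", "2001:db8::2")
```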
B
What
sort
of
main
challenge
already
hinted
is
really
what
happens
if
a
horse
is
have
an
idea
sleeping
if
or
if
house
may
change
their
mega
trusses
what
they
are
doing
working
out?
So
that's
actually
something.
What
we
we
found
is
that
sort
of
it's
actually
is.
You
feel
it's
a
very
good
idea
to
use
as
the
end
to
do
this
kind
of
things.
The
trouble
is
that
sort
of
as
Ethernet
and
IP
are
not
very,
very
well
integrated
with
each
other.
B: The target is a wired network, so yeah, if you're on Wi-Fi, no. We were actively targeting wired networks, because I feel that's a neglected area; nobody really does this for access networks, I don't see anybody doing it. I mean, we see a lot of activity in the Wi-Fi space, but for ordinary switched Ethernet networks everybody just assumes it works. And you're right.
B: There are a whole lot of assumptions here, and the question is really: should we maybe change some of those assumptions? The transparent switching approach for Ethernet has its merits, it's working, but it's also introducing a lot of troubles which we may or may not have if we had a much better integration and a way of finding out who we actually are.
A: Yes, maybe chat more in the break. Thanks for your talk. So up next we have Szilveszter Nádas from Ericsson Research. He says he's been working on traffic management for twenty years, and he's interested in finding practical scenarios where fast and precise control of resource sharing is needed among a high number of flows. So if you've got practical problems like that, perhaps find him in the break. Let's...
E: Thank you. This time it's Szilveszter presenting our paper on core-stateless fairness on multiple time scales, written together with my university colleagues. You can find all our papers on our home page. So our goal is to extend fairness to multiple time scales. First of all, we want to define some kind of multi-timescale fairness, and we want to build on existing frameworks, namely two. One is the per-packet value based core-stateless framework.
E
The
other
is
another
paper
of
mine,
not
with
the
same
also
as
it
is
called
moody
time
scale
bandit
profile,
and
we
also
want
to
provide
efficient
and
versatile
in
implementation
and
versatile,
meaning
that
we
want
to
provide
fine
fine
gain
fairness
on
multiple
time
scales
that
is
independent
of
traffic
mixes
and
resource
bandwidth
and
I
also
want
to
demonstrate
some
advantages
and
some
motivation
for
this.
As
an
example,
motivation
is
T
is
from
a
paper.
E
Sorry
so
overview,
of
course,
stateless
resource
sharing,
with
the
example
of
per
packet
value
base
course
agency.
So
sharing
its
it's
a
framework
which
allows
a
wide
variety
of
detailed
and
flexible
policies.
I
will
I
will
go
to
some
of
the
details
of
these
policies
and
enforces
the
policies
for
all
traffic
mixes
and
scales
with
the
number
of
flows.
E
There
are
two
components:
packet
marking
and
the
edge
and
resource
node
everywhere
else,
mainly
in
the
core,
which
does
a
command
scheduling,
so
packet
marking
at
the
edge
and
codes,
the
policies
into
value
marked
on
each
and
every
packet
and
there's
a
packet
marker
per
flow
flow
being
the
unit
where
we
have
resource
sharing
policies
and
based
on
that
packet
marking
the
resource
node
doesn't
know
about
policies,
flows,
it
doesn't
have
to
have
separate
keys.
It
only
does
its
behavior
based
on
packet
marking
and
packet
marking
only,
and
we
have
very
fast
and
simple
implementations.
E
So
betrayed
measurements
and
timescales,
botrytis
derived
measure.
There
are
all
these
great
packet
arrivals
and
these
can
be
translated
to
bitrate
and
it
which
way
it
always
has
a
timescale
associated.
So
it's
a
some
kind
of
volume
divided
by
some
cut
some
time,
some
kind
of
time,
and
there
are
some
natural
timescales
like
OTT
one
second
session
duration
may
be
one
minute.
Ten
minutes
and,
for
example,
amounts
is
mostly.
Cap
is
also
a
kind
of
like
a
bit
rate
limit
over
the
month.
E
So
you
can
see
that
some
examples
for
the
packet
rival
says
if,
if
in
a
normal
TT
more
packet
arrives
and
the
RTT
bit
rate
will
be
larger
or
there
can
be
a
session
bit
rate,
there
can
be
bit
rates
averaged
over
pages
which
are
larger
than
a
session,
and
then
then
these
betrays
will
be
smaller
than
the
session.
That's
right.
So
why
does
that
make
sense?
E
That
makes
sense,
because
we
can
have
fairness
and
multiple
times
here.
So
then
do
we
measure
a
bit
rate?
We
can
measure
betrayed
only
when
the
source
is
active
to
describe,
for
example,
the
performance
of
the
source,
but
we
can
also
measure
betrayed
during
both
active
and
inactive
period
to
judge
fairness
of
resource
sharing.
So
what
can
be
a
fan
as
goal?
E
A
multiple
time
scale
is
to
to
balance
the
the
bitrate
measurements
among
these
multiple
time
scales
and,
for
example,
to
allow
higher
share
on
shorter
time
scales
for
flows
below
their
fair
share
or
longer
time
scales.
So
there's
an
example
there.
If
we
have
the
same
fairness
on
each
and
every
time
so
one
to
one
sharing,
then
there
is
the
blue
flow
and
blue
flow
is,
is
having
the
same
same
resource
all
right
like
the
yellow,
and
it
takes
quite
amount
of
time
to
download
that
loser.
E
But
if
you
take
into
account
that
on
a
longer
time
scale
the
blue
food
didn't
have
any
transmission,
you
can
allow
a
higher
share
on
the
shortest
time
scale.
So
he
said
you
cannot.
Oh
this
one
to
five
share
and
steal.
On
the
longer
time
scale,
the
resource
sharing
remains
one-to-one
and
assuming
that
the
yellow
user
is
downloading
alone.
So
the
yellow
users
download
time
won't
change.
E
Why
why
the
blue
users
stupid
will
be
improved
quite
much
and
there's
there
is
one
more
thing
is
this:
this
is
this
was
true
for
our
previous
paper,
but
we
also
aim
at
smooth
transition
at
at
relations
between
the
rates
measured
at
different
time
scale.
So
we
don't
want
to
have
sudden
rate
changes
when
the
relation
between
rates
changes,
I
will
go
to
more
more
more
the
details
later
there
are
much
more
novelties
compared
to
the
previous
table,
but
first
a
short
introduction
to
be
a
backpack.
E
If
you
are
you
marking
like
versatility,
so
we
define
in
the
per
pocket
value
framework,
we
define
policies,
throughput
value
functions,
and
these
are
for
a
single
time
scale.
They
provide
a
very
fine-grained
control,
which
is
independent
of
traffic
mix
and
resource
bandwidth.
So
these
these
meet
our
requirements
for
a
single
time
scale,
and
if
you
have
a
traffic
mix,
if
you
have
a
resource
bandwidth,
then
that
will
always
result
in
something
we
call
condition.
Threshold
value
and
condition.
Threshold
value
is
a
horizontal
line
on
this
value
function.
E: There are several traffic classes here, but the interesting ones are the gold and the silver. In case of high congestion, we'd like gold flows to get twice the throughput of silver flows; in case of low congestion, we would like gold flows to get four times the silver throughput; and in between is medium congestion: when silver flows reach 10 Mbit/s, gold flows just can't get four times as much, but they get the rest, so they get something between two times and four times.
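A numeric illustration of this gold/silver policy follows; the curve shapes are invented for the example, and only the 2x/4x ratios and the 10 Mbit/s silver limit come from the talk:

```python
# Numeric illustration of the gold/silver policy described above (units: Mbps).
# The scheduler effectively inverts the throughput-value functions at the
# congestion threshold value (ctv): high ctv means high congestion.
def silver_rate(ctv):
    """Silver's throughput at congestion threshold value ctv, capped at 10."""
    return min(100.0 / ctv, 10.0)

def gold_rate(ctv):
    """Gold: twice silver under high congestion, at most 4x silver's cap."""
    return min(200.0 / ctv, 40.0)

# High congestion (large ctv): gold gets exactly twice silver.
assert gold_rate(50.0) == 2 * silver_rate(50.0)
# Low congestion (small ctv): silver is capped, gold gets four times as much.
assert silver_rate(2.0) == 10.0 and gold_rate(2.0) == 40.0
# Medium congestion: the ratio falls between 2x and 4x.
ratio = gold_rate(8.0) / silver_rate(8.0)
assert 2.0 < ratio < 4.0
```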
E
So
the
rationale
behind
is
to
have
some
kind
of,
for
example,
mini
movers
you
put
Target
and
Auntie
aluminum,
so
you
could
put
target
is
met
for
the
lower
priority
flow.
We
shouldn't
prioritize
the
higher
priority
flow
that
much-
and
this
is
again
an
illustration
of
the
Versa
type
policies
we
can
have
with
this
framework
and
and
again
we
just
set
based
on
these
curves.
We
redefine
packet
marking.
So
how
can
we
have
packet
barking
based
on
a
secret
value
curve?
We
have
incoming
packets,
we
measure
the
rate
somehow.
E
So
these
are
the
rate
based
packet
marking
policies,
so
we
we
measure,
for
example,
rate
48,
and
then
we
determine
a
uniform
random
rate
between
0
and
that
rate,
take
the
value
of
the
stupid
value
function
at
that
point
and
mark
that
packet
value
on
the
packet
and
just
by
having
that
marking
and
maximizing
the
transmitted
packet
value
in
the
core.
We've
realized
a
resource
sharing
target
sum
on
this,
so
there
are
other
papers
about
how
this,
how
this
happens,
but
it's
important
to
understand
this.
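The marking step just described can be sketched directly; the TVF used here is an invented decreasing function for illustration:

```python
# Sketch of the marking step described above: measure the flow's rate R, draw
# a uniform random rate in [0, R], and mark the packet with TVF(draw).
import random

def mark_packet(measured_rate, tvf, rng):
    r = rng.uniform(0.0, measured_rate)   # uniform random rate in [0, R]
    return tvf(r)                         # packet value read off the TVF

tvf = lambda r: 100.0 / (1.0 + r)         # an illustrative decreasing TVF
rng = random.Random(42)
values = [mark_packet(10.0, tvf, rng) for _ in range(1000)]
# Lower draws map to higher packet values, so every value lies between
# TVF(R) and TVF(0); low-rate packets carry the highest values.
assert all(tvf(10.0) <= v <= tvf(0.0) for v in values)
```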
E
This
kind
of
thing
for
understanding
this
paper
so
but
for
this
video
single
rate
measurement
a
single
time
scale.
But
if
you
want
to
have
multiple
time
scales,
we
need
read
to
measure
bit
rates
and
multiple
time
scales
and
we
actually
introduce
two
different
rate
measurement.
Algorithms.
We
have
token
bucket
base
rate
measurement
organ
for
the
RTT
time
scale
only
because
we
think
that
the
token
bucket
based
rate
measurement
models,
the
first
reboot
and
our
fair
share
of
the
lot-
are
not
quite
well.
It's
a
single
part.
Token
bucket.
E
The
the
bit
rate
on
the
smallest
time
scale
is
always
larger
than
the
bit
rate
on
the
next
time
scale
and
sense.
You
can
see
an
example.
There
are
like
time,
scale
ones
like
five
subtank
that
and
first
the
the
once
one
second
rate
measurement
reaches
the.
Since
we
put
down
the
five
second
and
10
seconds
and
and
the
slicer
version
and
the
transmission
stops
so
for
new
clothes
arriving
to
the
system,
they
will
have
pretty
big
bit
rate
measurements
or
no
time
scales,
but
they
will
still
have
moderate
rate
measurements
along
with
time
scales.
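A single-rate token bucket of the kind mentioned for the RTT scale can be sketched as follows; parameters are illustrative, not from the paper:

```python
# Sketch of a single-rate token bucket used as a per-timescale rate check,
# as described above: tokens refill at the configured rate and a packet
# conforms only if enough tokens have accumulated.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.t = burst, 0.0

    def conforms(self, size, now):
        """True if a packet of `size` conforms to `rate` at time `now`."""
        self.tokens = min(self.burst, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=10.0, burst=20.0)      # 10 units/s, 20-unit burst
assert tb.conforms(20.0, now=0.0)            # initial burst passes
assert not tb.conforms(20.0, now=0.5)        # only 5 tokens refilled: exceeded
assert tb.conforms(20.0, now=2.5)            # bucket refilled after idling
```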
E
So
how
can
we
then
define
throughput
on
multiple
time
scales?
Instead
of
having
a
single
supa
trial
function
per
flow
type,
we
introduce
once
we
put
value
function
per
time,
scale
per
flow
type,
so
for
a
single
kind
of
flow
instead
of
having
one
such
function,
you're
having
for
such
functions
and
all
of
these
have
a
time
scale
associated
so
so
what
does
it
mean?
We
first
defined
dominant
time
scale
so
dominant
time.
Scale
is
then,
for
example,
time
scale.
E
R
is
the
dominant
when
the
rate
measurement
at
that
time
scale
is
the
largest,
and
if
there
are
more
than
one
than
the
longest
time,
scale
applies
and
as
an
example
have
two
fluids
of
the
same
flow
type.
One
has
dominant
time
scale,
one
the
other
tournaments
go
forward,
so
one
has
just
arrived.
The
other
has
a
long
history
in
the
system,
and
we
say
that
multi
time
scale,
research
airing
is
such
that
they
should
share
the
bottleneck
according
to
TV
f4
and
TV
as
one.
E
So
it
is
as
if
they
would
be
different
flow
flow
types
in
the
single
time
scale
framework.
So
if
you,
if
you
look
at
it,
then
the
the
new
flow
is
the
is
the
blue
one.
It
should
have
much
higher,
so
you
could
share
and
the
old
old
flow
the
green
one,
but
we
also
aim
smooth
transitions
when
the
relation
between
our
eyes
change.
So
if
there
is
an
oscillation
between
which
is
the
dominant
time
scale,
we
don't
want
to
have
an
oscillation
in
the
resource
sharing.
E
So
how
do
you
achieve
it?
There
is
something
in
our
previous
paper
called
mu
times,
given
the
profile,
and
it
has
a
few
go
presidencies
and
the
future
chemicals
associated
with
each
of
these,
and
this
can
market,
given
the
preference.
If
there
are
enough
tokens
at
all
of
these
buckets
and
in
theory,
we
could
quantize
smooth
times
get
through
it,
value
function
to
an
MTS,
be
VP,
but
is
not
practical,
because
we
would
have
thousands
of
token
buckets,
but
we
can
use
that
to
approximate
what
happens
we
can?
E
We
can
look
at
multi
times,
get
bandwidth
profile
as
an
indirect
measure.
Mental
operation
rate
on
each
time
scale
and
limiting
token
buckets
would
would
determine
the
measurement
again.
I
can't
go
through
all
of
the
details
there.
But
if
you
look
at
the
look
at
what
happens
is
that
the
transition
between
the
street
value
function
will
happen
at
the
rate
measurements
at
the
different
time
scales
and
by
using
that
we
can
have
an
efficient
packet
marking
based
on
these
repeat
value
functions.
E
So
you
can
measure
bit
rate
for
all
of
the
time
scales
at
these
bit
rates.
You
can
determine
the
distance
between
the
street
value
function
and
that
will
result
the
blue
region
of
difference.
We
could
write
functions
and
similarly
to
the
single
x
kilogram,
you
can
have
a
uniform
random
number
determine
the
right
region
for
for
determining
your
packet
value.
So
you
can
choose
between
region,
1,
2,
3,
&,
4,
based
on
relations
between
your
random
number
and
the
measured
bit
rates
and
determine
your
your
packet
value
accordingly,
and
it's
actually
a
pretty
fast
algorithm.
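The region selection can be sketched as follows; this is a schematic reconstruction (the rate numbers are invented, and the mapping of regions to TVFs is simplified relative to the paper):

```python
# Sketch of the multi-timescale marking described above: the measured rates
# at the different time scales split [0, R_shortest] into regions, and a
# uniform random draw selects which per-timescale TVF supplies the value.
def pick_region(draw, rates_short_to_long):
    """rates are sorted shortest-to-longest scale, hence non-increasing."""
    for i, r in enumerate(rates_short_to_long[1:], start=1):
        if draw >= r:               # draw falls above the next scale's rate
            return i                # -> region of the shorter time scale
    return len(rates_short_to_long) # smallest region: longest-scale TVF

rates = [10.0, 6.0, 2.0]            # 1s, 5s, 10s measurements (illustrative)
assert pick_region(8.0, rates) == 1 # between 6 and 10 -> shortest-scale TVF
assert pick_region(4.0, rates) == 2 # between 2 and 6  -> middle TVF
assert pick_region(1.0, rates) == 3 # below 2          -> longest-scale TVF
```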
E
So
what
can
be
achieved
with
this?
We
did
some
simulations
to
understand
what
happens
in
the
system.
It's
a
pretty
complex
system.
Can
we
achieve
some
kind
of
gains
can
be
indeed
boost,
new
flares
things
like
that,
so
we
use
an
s3.
We
use
the
same
core
scheduler
like
in
our
previous
organ,
with
a
millisecond
delay
target.
We
had
several
flows.
One
flow
consists
of
either
a
single
DC
TCP
connection
or
for
for
cubic
TCP
connect
on
Josh
connections,
which
means
faster,
slow
start
for
for
the
cubic
flowers.
We
have
thermally.
E
Second
one,
second,
five,
second
and
ten
seconds,
this
time,
skills
and
tvf
for
was
gold
and
silver,
as
in
the
single
timescale,
t
VF
and
in
each
and
every
time
scale.
We
we
double
the
share
so
and
timescale:1.
We
should
have
twice
the
share
five
time
scale:
five,
we
should
have
doubled
the
share
time
scale,
one
four
times
the
share
and
time
scale
that
in
a
second
eight
times
the
share
of
the
long
time
ten
second
average
stupid.
E: The right-hand side curve is the single-timescale PPV reference. In the reference you can see that with the old framework the flows converge to the fair share pretty fast; we plot one-second average throughput here, so it basically converges within one second. What happens with the multi-timescale version is that when a new flow arrives it is boosted first, then there are some oscillations, and then the flows converge to the fair share.
E
How
can
we
visualize
it
or
understand
it
me
even
more
is
to
is
to
measure
two
things.
One
is
the
one
is
we
call
the
flow
time
averages
and
there
is
the
five
five
second
time
being
the
average,
so
the
flow
time
average
is
the
total
amount
of
buys
the
flow
has
downloaded
divided
by
the
total
amount
of
time
the
flow
has
spent
in
the
system.
E
Why
the
fisa
five
second
time
window
average
is
basically
looking
back
back
back
at
the
last
five
seconds
and
and
see
what
was
the
average
stupider,
including
periods
when
the
flavor
wasn't
even
there,
and
what
you
can
see.
I
highlighted
some
cases
with
the
cubic
multi
timescale
vs.
there
are
friends.
So
what
you
can
see
there
is
that
the
flow
time
averaged
one.
This
is
a
relative
curve
and
one
is
the
is
the
equal
share?
E
It's
it's
not
any
more
than
the
furture,
so
the
flow
time
averaged
has
a
high
boost
at
the
beginning
and
actually
the
five
second
average
fair
share
of
the
flow,
which
is
almost
the
the
equal
share
in
one
second,
which
means
that
in
one
second,
the
flow
can
download
as
as
much
data
as
his
equal
share
in
the
last
second,
including
the
four
seconds
when
he
wasn't
yet
there.
But
after
a
period
you
can
say
see
that
it
will
go
to
the
equal
sharing
site,
it
will
converge
to
equal
sharing
with
flows.
E
I
will
show
show
this
slide
first.
So
in
this
example,
there
are
continuous
arrivals
to
tiny
flows,
arrive
to
the
system,
every
10
second,
and
you
can
see
that
the
number
of
flows
is,
of
course,
increases
say
the
the
per
flow
equal
bandwidth
decreases,
but
for
each
of
the
new
flowers
there
are
temporarily
boosted
each
and
every
case
independently
of
the
number
of
flows,
and
then
they
all
converge
to
two
to
the
equal
share
as
we
desired
and
there's
another
example
here.
E
This
is
a
very
simple
adaptive
streaming
model,
its
inserts
some
kind
of
dialog
to
the
system,
every
10
second-
and
there
is
a
an
initial
time,
an
initial
download
and
we
show
how
much
time
does
it
take
to
download
one
second
verse
of
video
data
and
because
the
MTS,
the
multi
timescale,
resource
sharing,
boosts
the
initial
pause.
There
is
a
much
faster
startup,
so
the
time
display
is
much
less
and
also
every
time
it.
Yes,
it's
basically
a
lot
about
one
side
and
it
also
feels
a
play
after
but
for
most
foster
stage
it.
E: To sum up: we defined and implemented multi-timescale fairness, and these initial results are really promising. We have some kind of multi-timescale fairness which works; there are significant performance gains, an advantage for new flows in the starting phase, and there is also better long-term fairness for flows with on-off behavior. But there is quite an amount of future work; we are not quite sure how it can best be used and how useful it is. So what is, for example...
E: ...the practical number of time scales needed to provide real advantages for the users? How shall this be dimensioned? How shall we design the multi-timescale throughput-value functions? We used the simplest example we could imagine. Does it make sense to use different kinds of policies in the TVFs for the various timescales? And what else can we find that is relevant for further work?
G: Hello everyone, I'm Satya, from the University of Wisconsin-Madison, and thanks to visa processing delays I'm currently in Central Daylight Time, which is an hour behind Montreal. Today I'm going to be talking about our work examining how current local time is reported by devices on the Internet. Next slide, please.
G: I think you skipped a slide. Yeah. So, Internet time synchronization is useful for a variety of applications, and these mechanisms synchronize clocks on devices to a common time scale such as UTC. To make time more comprehensible, people-facing applications need to translate time from UTC to current local time, and writing code to do this translation is notoriously difficult, because several factors, such as daylight saving time rules and changes to these rules, need to be considered. Further, several edge cases need to be handled as well.
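The UTC-to-local translation being described is what the standard library does on top of the time zone database discussed in this talk; a minimal example using Python's `zoneinfo` (which requires the system or bundled tzdata to be present):

```python
# The UTC-to-local translation described above, via Python's stdlib zoneinfo,
# which consumes the time zone database this talk is about.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc_winter = datetime(2019, 1, 15, 17, 0, tzinfo=timezone.utc)
utc_summer = datetime(2019, 7, 15, 17, 0, tzinfo=timezone.utc)
ny = ZoneInfo("America/New_York")

# The same UTC hour maps to different local hours across the DST switch.
assert utc_winter.astimezone(ny).hour == 12   # UTC-5 in winter
assert utc_summer.astimezone(ny).hour == 13   # UTC-4 under DST
```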
G
Next,
like
this,
so
time
zones
originally
alternated
in
the
late
19th
century,
in
order
to
improve
coordination
in
railway
and
Telegraph
networks
and
the
daylight
saving
time
was
introduced
in
1918
in
the
u.s.
in
hopes
of
saving
energy
during
the
first
world
war
and
modern
applications
such
as
calendars
need
to
handle
and
know
about
time
zones
and
the
time
zone
database
is
a
critical
asset
that
contains
data
and
code
that
helps
applications.
Do
this
handling
and
this
database
was
created
by
Arthur
David
Olsen
in
the
early
1980s
next
like
this.
G: Here is the definition for the New York time zone. It says that until 1883 New York was behind GMT by 4 hours and 56 minutes and did not follow any daylight saving time; until 1967 New York was behind GMT by 5 hours and had its own rules for observing daylight saving time; and right now the daylight saving time in New York is the same as in the rest of the U.S. Below that we have the DST rules currently in effect in the U.S. since 2007.
G
The
rule
says
that
daylight
savings
start
in
March
on
the
second
Sunday,
where
clocks
are
advanced
by
1
R
and
end
in
November
on
the
first
Sunday.
So
information
like
this
is
organized
as
text
files
in
a
typical,
unique
style
and
bundled
with
code
and
reference
implementations
for
C
API
functions
to
handle
time
zones
and
usually
programs
in
the
database.
Next
slide.
Please.
So
this
database
was
placed
in
the
public
domain
by
Olson
in
2009,
and
hence
this
is
not
owned
by
any
entity.
G
The
database
is
currently
hosted
by
IAE
and
the
process
to
update
the
database
is
defined
in
RFC
six,
five,
five
seven.
So
the
database
is
maintained
by
a
community
of
volunteers
and
all
decisions
are
finalized
and
published
by
a
primary
maintainer,
the
current
primary
maintainer
for
the
database,
a
spa
legged
of
UCLA.
The
current
version
of
this
database
has
three
hundred
and
forty
eight
times
owned
records
for
various
time
zones
across
the
internet.
G
Oh
sorry,
across
the
world,
and
this
database
is
used
by
almost
all
major
operating
systems
such
as
various
flavors
of
Linux,
Android,
iOS
and
various
programming,
libraries
by
TZ
and
Jorah
time,
etc.
Next
slide,
please
so
the
actual
rules
for
the
time
zones
and
observing
daylight
saving
time
is
decided
by
the
local,
corresponding
local
government
authorities,
and
this
and
this
information
and
changes
to
such
rules
are
published
either
officially
or
through
the
media
and
when
such
share
changes
or
updates
are
known.
G
The
time
zone
database
community
discusses
such
changes,
the
official
time
zone
mailing
list,
and
once
the
changes
are
finalized,
a
new
release
is
prepared
by
the
maintainer
and
published
at
a
known
location.
Then
it
is
up
to
the
consumers
of
this
database
to
push
these
updates
to
their
end
users,
which
are
typically
done
through
operating
system
updates,
and
any
delay
in
this
process
has
the
potential
to
cause
major
disruptions.
G
For
example,
in
2015
due
to
ongoing
elections,
the
government
of
turkey
decided
to
delay
the
end
of
daylight
saving
time
and
announced
this
change
only
three
weeks
in
advance
to
the
actual
date
of
effect.
So
what
finally
happened
is
the
update
to
the
database
being
pushed
was
delayed
and
the
iOS
update
bundling.
These
changes
reached
the
end
users
only
three
days
before
the
date
of
change.
G
They
also
download
the
entire
mailing
list
archive
and
perform
text
search
on
the
archive
to
identify
problems
and
anomalies
with
the
database
which
will
later
discuss
in
this
talk
to
perform
this
analysis,
we
built
a
Python
based
processor
tool
that
can
compare
consecutive
releases
to
detect
effective
changes
in
zone
and
DST
rules,
and
this
tool
is
also
capable
of
identifying
correction
updates.
That
is,
updates
that
make
changes
to
previous
updates.
So,
in
total,
this
tool
identifies
about
2200,
h2
zone
and
DST
runs
with
400
of
those
labeled
as
correction
updates.
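The core of the release comparison can be sketched as a dictionary diff; this is a hedged reconstruction for illustration (the rule representation is invented, and the authors' actual tool parses the tz source files):

```python
# Sketch of the comparison the tool performs: given the rule sets of two
# consecutive releases as {rule_name: definition} dicts, report which rules
# effectively changed between them.
def diff_releases(old, new):
    """Names of rules whose definition changed between two releases."""
    return {k for k in new if old.get(k) != new[k]}

rel1 = {"US": "Mar lastSun", "Turkey": "Oct lastSun"}
rel2 = {"US": "Mar Sun>=8", "Turkey": "Oct lastSun"}     # US rule updated
rel3 = {"US": "Mar Sun>=8", "Turkey": "Nov Sun>=8"}      # Turkey-style delay

assert diff_releases(rel1, rel2) == {"US"}
assert diff_releases(rel2, rel3) == {"Turkey"}
```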
G
Next
slide,
please
so,
based
on
this,
what
we
do
is
we
take
these
updates
and
then
classify
them
into
either
making
changes
to
time
zones
or
to
DST
rules,
and
since
we
can
extract
the
release,
dates
of
this
other
updates
from
the
mailing
list.
We
also
see
whether
these
updates
make
changes
to
time
stamps
in
the
past
or
in
the
future
and
from
the
charts
on
the
right.
We
can
see
that
make
the
majority
of
the
updates
have
to
deal
with
the
DST
rules,
so
this
shows
the
huge
impact
daylight.
G
Next slide, please. Then we also looked at the community by assessing the mailing list archives, and we see that over a span of 30 years there are about 19,000 emails in total from 1,800 unique contributors. From the histogram on the right, we see an increasing trend in the number of emails and contributors after 2012, particularly after the adoption by IANA, and this trend is correlated with an increasing use of the database, particularly after the adoption of mobile and smart devices.
G
Ingesting information from such a large number of contributors is a potential concern from a management perspective. Next slide, please. So earlier we saw that most of the updates deal with DST rule changes, so then we wanted to reason about these changes: why do we see so many of them?
G
We analyzed the... sorry. To evaluate this hypothesis, we take the most recent version of the database and count the number of updates made to each time zone or DST rule. We generate a histogram of these changes, which are made by government entities, and then we group these time zones by country and look at the history of the country in question, as in the example shown here.
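The counting and grouping step just described could be sketched like this. It is a toy illustration: the change records below are invented, and in the real analysis the zone-to-country mapping could come from the database's own `zone1970.tab` file rather than a hand-written dictionary.

```python
# Toy sketch: count detected rule changes per zone, then group by country.
# The change records and the zone -> country map are invented examples.
from collections import Counter

# (zone, year-of-change) records, as a diff tool might emit them
changes = [
    ("America/Chicago", 1942), ("America/Chicago", 1967),
    ("America/New_York", 1942), ("America/New_York", 1967),
    ("America/New_York", 1974), ("Europe/Istanbul", 2015),
]

# Hypothetical mapping; real tzdb releases ship zone1970.tab for this
country_of = {
    "America/Chicago": "US",
    "America/New_York": "US",
    "Europe/Istanbul": "TR",
}

per_zone = Counter(zone for zone, _ in changes)
per_country = Counter(country_of[zone] for zone, _ in changes)

print(per_country)
```

Plotting `per_country` against the years of the changes is what surfaces the clusters around historical events that the talk describes.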
G
So we see the histogram of rule changes for North America, and as we hypothesized, after eliminating all the noise, we see that a huge number of changes correspond to major events such as the World Wars, and to policy updates such as the Uniform Time Act or the oil embargo that was in effect during the energy crisis in the US. So in addition to confirming our hypothesis, this also shows that the time zone database provides a unique perspective on world historical events.
G
We see similar results for time zones belonging to other countries, which are not shown here due to space constraints. Next slide, please. So next we tried to identify the problems and anomalies related to the database updates. Our intention here is to highlight the importance and the impact of these database updates. So, as mentioned earlier, about 19 percent of the updates are flagged as correction updates by our tool.
G
These errors highlight the problems in an informal update process that ingests data from a large number of contributors, and sometimes we see even correct database updates causing unintended bugs in other software systems. For example, a provision to include a negative time offset in the database broke several software systems, such as OpenJDK, the Qt framework, and so on.
G
Another issue that we noticed with the database updates is the disruption caused by delayed updates, which we saw earlier. Next slide, please. So, based on our analysis, we have come up with a set of recommendations intended to improve the security and integrity of the overall system. Our intention here is not to impugn any individuals, who have indeed contributed significant time and energy to the upkeep of the database.
G
Rather, our goal here is to expand perspectives and to start discussions about this critical asset, and as such, we do not provide any implementation for our recommendations, because these recommendations could be implemented using standard open-source tools. Our recommendations are intentionally high level; we believe that the details should be fleshed out within the tz community. Next slide, please.
G
So our first recommendation is to codify the overall update process by introducing more formalization, such as standard release tools, standard documentation, a ticketing system to track each and every update, and, most importantly, a test suite to ensure the correctness of the database. We recommend that these updates are treated in the same way as Linux kernel patches. Then, the overall update process currently relies extensively on the primary maintainer or coordinator of the database, which makes it a single point of failure against a motivated attacker or a government entity.
G
So we recommend stronger cryptographic measures to improve security, and we also believe that introducing more formalization will help in this regard by introducing more transparency. And finally, we also recommend an independent third-party audit of this database, which could be conducted periodically and could test the database from the perspective of end users in different time zones. Next slide, please.
G
So in summary, we examined how the time zone database is used by devices connected to the internet and how it has evolved. We also looked at the maintenance and update processes, and we see that daylight saving time has a huge impact on the update process, and we also see a huge amount of effort going into maintaining the historical accuracy of the database. Based on our observations, we came up with a set of proposals aimed at enhancing the security and integrity of the database.
G
So some of the time zones could be selected where the test could manually check for offsets. The test could pick certain time zones, for example Central Daylight Time, where the offset is well known, and then make sure the data is indeed consistent with that. So time zones across the world could be picked at random and then checked for consistency.
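A consistency check of the kind suggested here could be sketched with the standard library's zoneinfo module (Python 3.9+). This is a minimal illustration; a real test suite would cover many zones and many historical dates, not just one well-known offset.

```python
# Minimal sketch of the suggested offset check: pick a zone whose offset
# is well known and verify the installed tz data agrees with it.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def check_offset(zone, when, expected_hours):
    """Return True if `zone` has the expected UTC offset at `when`."""
    offset = datetime(*when, tzinfo=ZoneInfo(zone)).utcoffset()
    return offset == timedelta(hours=expected_hours)

# America/Chicago observes Central Daylight Time (UTC-5) in July
ok = check_offset("America/Chicago", (2023, 7, 1, 12, 0), -5)
print(ok)  # True when the installed tz data is consistent
```

Running such assertions for a randomly sampled set of zones after every release, as the answer suggests, would catch offset regressions before they reach end users.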
G
F
I just have to say thank you for doing this work. Excuse me, my dog is kind of leaving me, but this work is, I think, super useful, because it shines a light on hidden parts of the internet that we take for granted, and I disagree with the first person at the mic who said, if there's been no attack...
F
G
I think RFC 6557, which describes the update process, could be looked at again, because some of the processes are left to the discretion of the user. For example, something like using PGP signatures to sign the database is left open: the text just says that the signature SHOULD be used, not MUST be used. So some of these security measures could be made mandatory, and then the RFC could be looked at again. That is something that the IETF can do.