From YouTube: IETF99-NMRG-20170717-0930
Description
NMRG meeting session at IETF99
2017/07/17 0930
https://datatracker.ietf.org/meeting/99/proceedings/
B: And we are organizing this workshop here on measurement-based network management. We started originally with looking only at NetFlow, or maybe the flow space, and over the last two years we extended it to general measurement-based management, so this is what we call a workshop. We have six speakers giving insight into the current work on using network measurements for network management, and of course this is not a conference, so we hope that there will be a lot of interaction.
D: So, as I already mentioned, I am from CESNET, the Czech national research and education network; more specifically, I work in its monitoring group, and our main goal is to monitor the perimeter of our network and to guard that perimeter from external or internal attacks that we can detect by monitoring these lines. We have a vast tool set to do so, from network probes (the data acquisition on the monitored lines) through collection, detection, storage, visualization and remote configuration of the whole pipeline.
D: In this presentation I want to speak about network probes, how we collect data and how we analyze the packets. The standard approach to monitoring probes is that you have a network interface card that provides packet capture, and on those captured packets you do some software processing on the CPU of a host server. That is the standard approach. We also have an accelerated approach, where the probe itself has an accelerator in an FPGA, and this accelerator can somehow aid the processing of the packets.
D: So the software performs only very specific or very advanced processing tasks, and the card can accelerate the total throughput of the system. We call this concept Software-Defined Monitoring (SDM). The concept is here on the right: you have your processing applications, your flow exporter that exports flow data in IPFIX or NetFlow formats. It receives data from the card; the card receives packets from the line and doesn't send only whole packets to the software applications: the software applications can instruct the card to do some processing.
D
It
can
either
parse
the
packets
and
send
only
headers
instead
of
whole
packets
or
we
have
flow
cash
in
the
hardware.
So
some
kind
of
net
flow
records
or
aggregated
records
for
flow
can
be
also
sent
to
the
software
instead
of
whole
packets
and,
as
I
said,
this
is
all
controlled
by
the
application.
So
it's
flexible
depending
on
application,
you
can
have
different
requirements
how
this
data
should
be
pre-processed,
so
we
measured
the
achievable
speed
ups
on
various
network
monitoring
tasks
and
we
proved
to
first
under
a
net
flow.
D: For plain NetFlow we saw something like a 5x speed-up; for application-layer processing, mainly HTTP processing, we saw a speed-up of about 3x compared to the standard monitoring probe, where the software needs to process every packet. We already published those results at the INFOCOM conference and in the IEEE Transactions on Computers journal. So now the question is: can we use this software-defined monitoring system to also accelerate an IDS? So not only network monitoring, but intrusion detection.
D: Our assumptions are that current IDS systems are not fast enough for current high-speed networks, and that the discarding method they use is basically blind discarding: you have an input buffer, and if the IDS is not fast enough, these buffers start to blindly lose packets, which is not good for the detection. Informed
D
Discarding
can
be
better
and
perform
better.
So
another
assumption
is
that
usually
the
attacks
or
the
threats
are
present
in
the
initial
packets.
So
usually
you
don't
find
a
threat
insider
in
the
middle
of
a
let's
say
long,
video
stream,
but
somewhere
in
the
beginning
of
each
network
connection.
So
in
the
last
assumption
is
the
heavy
tail
character
of
flow
sizes
of
network
connection
sizes.
D
When
you
have
a
very
few
heavy
flows
that
carry
mate
majority
of
all
the
network
traffic,
so
by
dropping
only
the
middle
packets
of
these
few
heavy
flows,
you
can
get
a
pretty
decent
reduction
of
total
network
traffic
that
is
processed
by
the
CPU.
So
these
are
the
assumptions
using
SDM,
basically
to
accelerate
IDs
we
use
as
as
a
pre-filter,
so
we
don't
need
a
flow
kitchen
hardware.
Our
application
is
IDs,
but
the
system
stays
the
same.
Basically,
so
it's
only
a
different
use
case,
so
we
don't
need
to
modify
the
SDM
concept.
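The per-flow pre-filter described above can be sketched in a few lines. This is a minimal illustration, not the actual FPGA implementation: the flow-key fields and the threshold value are assumptions (the talk mentions thresholds around 20 to 30 packets).

```python
# Sketch of the "forward only the first N packets per flow" pre-filter.
# In the real system this runs in the FPGA; here it is plain Python over
# hypothetical (src, dst, sport, dport, proto) dictionaries.

from collections import defaultdict

def prefilter(packets, n=20):
    """Yield only the first n packets of every flow; drop the rest."""
    seen = defaultdict(int)  # flow key -> packets forwarded so far
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        if seen[key] < n:
            seen[key] += 1
            yield pkt  # forwarded to the IDS
        # else: dropped -- middle/tail of a heavy flow

# With heavy-tailed flow sizes, a few large flows dominate the traffic,
# so dropping their tails removes most of the IDS load while the initial
# packets (where threats usually appear) are still inspected.
heavy = [{"src": "a", "dst": "b", "sport": 1, "dport": 2, "proto": 6}] * 100
small = [{"src": "c", "dst": "d", "sport": 3, "dport": 4, "proto": 6}] * 3
print(len(list(prefilter(heavy + small))))  # -> 23
```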
D: We just use it for a new use case. Our test setup was: we have two servers. We have a traffic generator with large pcap files on an SSD drive, which we replayed using tcpreplay over a standard network interface card. These are pcaps collected from our network, so it's real traffic, not synthetic. The replayed traffic was then delivered to our acceleration card.
D: In that card we have the SDM firmware, the hardware acceleration part that forwards the relevant packets to the IDS software, and the software instructed the SDM to do the pre-filtering. We tested two IDS systems, Suricata and Snort. The testing server, just the quick stats: we've got two physical Intel processors with a total of 16 cores at 2.6 GHz, a lot of RAM and, of course, our acceleration card with the SDM firmware, as I already mentioned, so with the accelerator on the FPGA on the card. So, for some results.
D: First of all, Snort over a standard libpcap interface. To describe the graphs: on the x-axis of both we have the speed of the input link, here from a few hundred megabits per second to two gigabits per second. On the upper graph we have the buffer overflow percentage, the percentage of blindly lost packets on the input of the IDS due to its lack of performance at a given speed, and on the lower graph we have the number of detected events at that speed.
D: The pcap was the same, so the number of detected events should be the same if the detection is a hundred percent correct, but we can see from the graph that, due to the blind loss of packets, the number of detected events is decreasing. If we take a look, the green line is the IDS Snort without our acceleration: pretty quickly, even at two hundred and fifty megabits per second, the drop rate starts to rise and therefore the detection rate drops. Using our accelerator,
D: the drop is around zero up to a three times larger speed, so up to 750 megabits per second, and then it starts to rise slowly as well, so the detection rate is better than without SDM. This is what we wanted to achieve, a better detection rate: the IDS needs to process only the packets that are interesting and can therefore detect more threats.
D: Here are similar graphs, but this time for Suricata on the same pcap. We can see basically the same trend: without SDM, the drop rate starts very quickly; with SDM, the drop rate is not so significant, and the number of detected threats is also not declining so rapidly. So we can use this IDS at even higher speeds, with more precise detection rates.
D: We used all the rules that are available for Suricata, like 13,000 rules, but for this test we reduced it to only one class of detection, so the number of rules was only 1,000 or something like that, and the performance is even better, even more than 10 gigabits per second, up to 20. We can see basically the same trend: as the drop rate increases, the detection rate decreases, and again with SDM we get better performance, better detection.
D: Right, so to summarize: an IDS can be accelerated using our SDM system, and with the reduction of packet loss we increase the accuracy of detection. Seen from the other side, SDM is applicable not only to network monitoring but also to network security, specifically IDS acceleration, and we are preparing another INFOCOM paper about this acceleration; these were just preliminary results for that paper. So thank you for your attention.
D: We have a variable threshold on the packet at which we start to drop; usually something like the first 20 packets are analyzed and then we start to drop, or 30, or 10, but usually it's between 20 and 30. We did play around with this parameter, and basically after 20 or 30 the number of dropped packets is declining, but the number of detected events stays the same.
D: As I mentioned, for Suricata and Snort: Snort has limited support for some extensions, which is why we have only tested it so far with the standard libpcap interface. But for Suricata you can implement your own plugins, basically input plugins or some other kinds of plugins, that can take the data for you and do some pre-processing with them. So using the plugins, using the API that is already there, we were able to extend the system with SDM acceleration.
G: So the context of the work is that we are focusing on network monitoring data which is usually used for security, forensics and anomaly detection. The goal is to identify malicious activities, the traffic patterns behind them, and the alerts that this malicious activity has triggered. We focus on a specific data modality, which is IBR, Internet background radiation. Basically, this data is coming from network telescopes, darknets; usually it's noisy traffic, but an important source of forensic data. There is a lot of information that we collect through these telescopes, such as scans, backscatter and misconfigurations.
G
The
data
is,
will
consider
a
volume
and
wide
range
of
services
and
sources.
The
extraction
of
structure
and
compose
of
this
data
is
difficult
because
it's
noisy
and
then
complete
information.
We
have
all
the
packets.
We
are
with
not
closed
because
it's
passive,
so
we
just
receive
packets
from
the
internet,
so
they
usually
the
GAO,
learn
to
study
this
kind
of
traffic
to
predict
and
model
internet
malicious
activities.
Basically
large-scale
scallop
order
or
the
dose
the
I
don't
see.
The
night
observe
ourselves.
G: Other work was done by Moore and co-authors in 2006, where they studied the probability of observing denial-of-service attacks within a telescope of a given scope, and Wustrow and co-authors in 2010 did another work to characterize internet background traffic over multiple darknets, extracting environment features and the level of pollution of the darknet addresses. This was among the first work regarding darknet characterization. There are more works that focus on darknet data, and other work in 2015 regarding DNS queries by the same authors.
G
Network
is
little
bit
different
because
we
applied
new
technique.
This
technique
is
based
on
topological,
topological
data
analysis,
its
TBA
or
today
it's
a
branch
of
mathematics
where
the
goal
is
to
study
high,
dimensional
and
complex
data
by
extracting
invariably
emetics
feature
from
this
data
to
discover
relationships
and
patterns.
So
the
goal
is
to
study
large-scale
data,
multi-dimensional,
look,
stacked,
ovarian
agility
and
money.
The
PDA
has
some
fundamental
properties
very
interesting,
which
are
coordinate,
which
coordinate
invites
some
tennis.
G
That
does
not
depend
on
coordinate
system
analyze,
which
our
case
interesting,
because
we
can
analyze
data
connected
from
different
platforms.
It
is
the
formation
of
ice,
so
we
so
it
is
less
and
insensitive
to
noisy.
Even
the
data
is
noisy.
There
is
no
problem
with
this
beta.
We
can
apply
to
DA
and
it
is
able
to
handle
approximate
data
and
the
third
form
that
we
obtained
using
PDA
the
compressive
version
of
the
day
press
the
presentation.
So,
basically,
will
it
take.
G
So
here
our
example
is
not
related
to
that
working,
but
we
take
CD
shape,
which
is
Rabbitohs
first
alpha,
and
here
we
have
as
input
data
a
3d
point
cloud
which,
with
many
many
number
of
points
in
this
case
we
used
the
photonic
function,
which
is
example
X
and
recipe.
So
this
function
will
allow
us
to
add
another
dimension
to
the
data,
and
this
dimension
will
filter
the
data,
its
filtering
function.
They
will
explain
later
switcher
and
the
output
will
be
in
at
work,
paragraph
18
edges
that
represents
drop.
G: So, to apply TDA to our use case, we extracted some noisy traffic monitoring data from the darknet that we use in our lab, and we applied one technique from TDA, which is the Mapper algorithm. The algorithm, as explained before, allows us to obtain this particular graph; this graph represents the data instead of the many raw points. For the processing steps: we have the darknet, where we extract some features related to the packets, basically the source address, the destination address, the ports and the TCP flags, and compute
G: So these are the details of the Mapper algorithm. The input is the feature vector; it contains the features of the packets from the darknet. Basically, in this case we used the timestamp, the source and destination IP addresses and the ports of the TCP protocol. The parameters of the algorithm are the ones that we use to split the values of the filter function.
G
But
the
technique
is
really
flexible,
so
can
use
and
other
types
of
reflection,
the
data,
our
data
from
the
trench
and
the
package
for
future
packets.
This
OVA
deployed
over
the
intervals
that
we
define
on
the
filtering
function
and
then
last,
but
each
using
the
bisque
and
clustering.
Technically
we
can
use
another
one,
but
in
this
case
we
used
the
bisque
and
what
each
vertex
is
a
pastor
of
a
be
the
cluster.
The
containers
using
to
the
discard
and
LG
represents
a
non-empty
intersection
between
clusters,
so
good
bye.
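The Mapper construction just described (filter function, overlapping intervals, per-interval clustering, edges for overlapping clusters) can be sketched on 1-D data. This is an illustrative toy, not the speaker's implementation: the identity filter and the simple gap-based grouping below stand in for the packet features and DBSCAN used in the talk.

```python
# Minimal Mapper sketch on 1-D data: build an overlapping interval cover
# of the filter range, cluster the points inside each interval, and link
# clusters that share points.

def cover(lo, hi, n_intervals, overlap):
    """Overlapping intervals covering [lo, hi] (overlap fraction in [0,1))."""
    length = (hi - lo) / (n_intervals - (n_intervals - 1) * overlap)
    step = length * (1 - overlap)
    return [(lo + i * step, lo + i * step + length) for i in range(n_intervals)]

def cluster(points, gap=1.0):
    """Group sorted 1-D points whose neighbours are closer than `gap`
    (a dependency-free stand-in for DBSCAN)."""
    clusters, current = [], []
    for p in sorted(points):
        if current and p - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters

def mapper(data, n_intervals=4, overlap=0.3, gap=1.0):
    """Return Mapper vertices (clusters) and edges (shared points)."""
    vertices = []
    for lo, hi in cover(min(data), max(data), n_intervals, overlap):
        for c in cluster([x for x in data if lo <= x <= hi], gap):
            vertices.append(set(c))
    edges = [(i, j) for i in range(len(vertices))
             for j in range(i + 1, len(vertices))
             if vertices[i] & vertices[j]]  # non-empty intersection
    return vertices, edges

# Two well-separated groups of points yield two vertices and no edge.
v, e = mapper([0, 1, 2, 10, 11, 12], n_intervals=2, overlap=0.3, gap=2.0)
print(len(v), len(e))  # -> 2 0
```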
G: We applied this technique over the data, for example with 1,000 intervals as the parameter, and we extracted some patterns, for example scanning activities, which is the result we expected. We checked them manually, and we found vertical scans, successive address scans, some sparse scans and some randomized scans.
G
Some
scanning
activities,
probably
like
that
and
we
applied
also
Sirica
tap
on
the
same
data,
and
we
observe
that
we
are
able
more
than
sericata
only
five,
so
first
can
I
get
DVD,
but
for
scanning
activities
using
this
list,
the
parameters
are
estimated
using
an
error
method.
So
we
have
tried
when
we
find
something
interesting
and
but
when
we
find
them,
it
remains
them.
So
we
can
apply
them
more.
Data
sets
and
we
find.
G: So, as a conclusion and future work: in this work we presented this new technique that we are using to extract patterns from darknet traffic. Basically, this technique uses two functions. The first one is the filter function, which represents one or more dimensions used to filter the data, to which we apply the number of intervals and the overlap percentage; and the second is a partial clustering, using the DBSCAN technique.
G
How
to
apply
decent
qualitative
is
technical,
different
types
of
data,
basically
data
from
networking,
and
we
want
to
know
more
on
this
variety.
How,
basically
it's
cluster
data,
but
it
is
more
because
we
clustering
in
each
interval-
it's
not
the
global
clustering,
so
also
this
one,
for
the
reason
that
we
want
to
validate
and
obtain
the
good
results,
but
I
agree
that
we
need
to
compare
this
to
other
peoples,
my
accessibility
techniques.
What,
as
we
have
even
can
use
or
just
discard?
G: This is one of the future works that we want to do: to apply the same technique to, I mean, production networks, to the packets or even the flows, to extract some patterns that would abstract some services. But we need to find the good parameters to extract, for example, some scanning activities targeting the network. So the technique is open; we can apply it broadly.
J: Okay, good morning, my name is Michele Santos, I am a professor in Brazil, and the objective of this presentation is to talk about some SDN placement challenges. Probably everybody here has heard about SDN; the objective is not to give a class about SDN, but a brief review. It's important to have in mind that we have a separation of the control plane from the data plane, and in the control plane we have the SDN controller, which is centralized. In a nutshell,
J
Sdn
is
a
paradigm
that
the
cop
of
the
photo
plane
from
the
data
plane.
So
we
have
this
famous
figure.
In
a
summary,
we
have
three
layers:
the
network
elements,
the
data
plane,
the
Kanto,
Plain,
Jane
controller
means
software's,
like
parts
not
good
light
of
the
life
and
so
on,
and
some
Sdn
applications
that
will
run
over
the
controller.
J
So
we
need
to
have
a
different
view
about
how
to
treat
scenarios
of
optimization
when
we
take
when
we
think
about
Sdn.
We
have
now
a
single
point
of
failure
ease
and
we
needed
to
think
about
dependability
and
knowledge
in
a
different
way.
We
will
have
overhead
mainly
in
the
SDN
controller,
and
we
need
to
think
in
a
different
way
about
planning
and
provision,
and
we
have
some
problems
like
the
flows
at
a
time
flows
at
a
time,
and
we
need
to
think
about
well
place.
J
So
we
have
a
lot
research
question
related
to
this
scenario.
For
example,
what
is
the
best
placement
position
for
each
Sdn
controller?
What
is
the
curse
of
each
controller?
What
should
it
be
the
capacity
of
each
controller?
What
should
it
be?
The
capacity
of
its
controller,
how
to
manage
the
rules
and
the
network
policies
in
a
centralized
way
how
to
deploy
virtual
light,
as
in
so
how
to
categorize
this
problem
in
each
way
we
propose
in
this
presentation
and
probably
draft
three
men
placement
problems
about
Sdn,
as
in
controller
placement
problem
has
been
ruled.
J
Placement
problem
has
been
hypervisor
placement
problem.
I
was
trash-talk
talking
about
Sen
controller
placement
problem,
it's
a
famous
problem
problem
and,
in
summary,
a
steam
controller
and
placement
problem.
All
CPP
deals
if
the
allocation
of
a
theme
in
trollese
in
a
network.
It
seems
easy
question,
but
it's
not.
When
we
think
about
this
problem.
We
need
to
decide
how
many
controllers
are
required
to
support
a
network
where
to
place
each
Sdn
controller
and
what
is
the
controller
demand?
In
other
words,
in
this
trigger?
J
In
the
right
side,
we
can
see
the
controller
1
and
we
have
a
lot
of
switches,
switches,
assigning
it
to
this
controller
and
it's
Adam,
and
so,
when
we
think
about
optimization
and
the
objective
function,
it's
not
polynomial
problem,
it's
an
np-hard
problem
and
we
have
several
papers
about
this
problem
like
if
we
quick
search
for
Ashley
and
control
placement
problem
in
scholar
Google.
We
can
find
hundreds
of
papers
about
this
problem.
J
So
the
second
problem
is
as
the
SDN
roll
placement
problem.
It
seems
similar
to
the
last
problem,
but
it's
not
when
we
have
applications
running
over
the
controller
and
when
I
talk
that
it's
not
one
application,
but
several
occasions
we
needed
to
run
the
conflicts
we
needed
to
to
the
science
the
best
path,
and
it
seems
ok,
old
problem,
but
you
think
about
that
with
a
simple,
with
same
controller
with
at
the
end
paradigm,
we
can
think
about
how
to
do
the
load
balance.
At
the
same
time.
J
Save
image
turn
off
some
links
and
aggregating
some
flows
and
a
specific
link
and
everything
in
our
we
have
time:
it's
not
a
user
problem
that
you're
solving
a
real
time
when
you
have
a
huge
network
and
several
nodes.
So
we
have
different
types
of
rules
like
access
control,
world
policies,
track,
shaping
thirteen-year,
load
balancing,
and
we
need
to
do
everything
in
a
real-time
and
have
the
conflicts
and
policies.
So
what
do
you
think
about
this
problem
like
optimization
problem
and
use
some
techniques
like
integer
linear
program
of
some
areas?
J
So
the
last
problem
is
Sdn.
Hypervisor
placement
problem
in
summary,
hypervisor
placement
solutions
deals
is
how
men
hypervisor
instance
are
needed
and
deposition,
it's
very
similar
to
the
SDN
controller
placement
problem.
The
first
problem
that
I
talked
to
two
minutes
ago.
So
here
is
an
example.
With
a
visor
or
I
can
talk
about
slow-mo
we
have
a
hypervisor
and
the
hypervisor
will
create
abstraction
for
each
controller
in
the
infrastructure.
J: It's similar to the controller placement problem, but the placement of the hypervisor demands detailed investigation, because some questions arise, like where to deploy the SDN hypervisor. We need to think about hypervisor reliability and fault tolerance, because we will have another layer of software, and about scalable hypervisor design. Okay, I talked to you about some problems, and I tried to define some problems related to SDN, but the question is how to solve these problems. What is the best technique to solve these problems? And the answer is: it depends.
J
So
in
summary,
we
have
two
categories:
two
categories,
exactly
resolution
methods
that
give
as
optimal
solutions-
and
we
have
a
time
consume
well
a
high
time
McCracken.
Some
examples
technically
is
Brenton
about
dynamic
program
and
linear
and
entry
program.
We
have
here
a
trade-off
because
we
have
exact
methods
to
solve
these
problems,
but
to
retake
time.
On
the
other
hand,
we
have
a
rich
fish
with
quads
ultimate
solution:
reasonable
competition
times,
like
some
techniques
like
Jeanette,
Jeanette,
Jeanette,
algorithms,
simulated
annealing
and
ant
colony,
optimization.
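The heuristic side of this trade-off can be illustrated with simulated annealing on the same kind of toy placement objective. This is a generic sketch, not the algorithm from any of the cited papers; the cooling schedule, step count and cost function are all assumptions made for the example.

```python
# Simulated annealing for a toy controller placement: move one controller
# at a time, always accept improvements, and accept worse moves with
# probability exp(-delta/T) under a linearly cooling temperature T.

import math, random

def anneal(n, k, cost, steps=2000, t0=2.0):
    """Return (best placement sorted, its cost) for k controllers on n nodes."""
    random.seed(0)  # deterministic for the example
    cur = random.sample(range(n), k)
    best, best_c = list(cur), cost(cur)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9            # cooling schedule
        cand = list(cur)
        cand[random.randrange(k)] = random.randrange(n)  # perturb one slot
        if len(set(cand)) < k:
            continue  # skip duplicate placements
        d = cost(cand) - cost(cur)
        if d <= 0 or random.random() < math.exp(-d / t):
            cur = cand
            if cost(cur) < best_c:
                best, best_c = list(cur), cost(cur)
    return sorted(best), best_c

# Toy cost: worst-case distance on a 6-node path (distance = |i - j|).
def worst_case(ctrls, n=6):
    return max(min(abs(s - c) for c in ctrls) for s in range(n))

print(anneal(6, 2, worst_case))
```

On this tiny instance annealing finds the same optimum as brute force, but unlike brute force its per-step cost stays flat as the network grows, which is the "quick, near-real-time solution" property the dynamic-provisioning work relies on.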
J
So
one
normal
question
that
arrives
from
this
presentation
is:
okay:
I
have
a
iesson
placement
problem.
What
is
the
best
techniques
to
solve
this
problem?
That's
not
the
best
here
is
the
example.
We
have
a
paper
published
in
the
pupil,
I
communication
error
and
it's
paid.
This
paper
is
from
2012
and
to
use
a
linear
program
to
solve
this
problem.
We
have
a
model,
and
here
we
have
another
paper,
and
this
paper
dynamics,
controller
provision
and
software-defined
networks
and
they're
all
of
autos
proposed.
J
The
risk
is
Bayes,
it
simulated
annealing
for
the
same
problem,
but
in
this
case
we
have
dynamic
placement.
So
we
need
to
have
quick
solution.
We
need
generate
a
quick
solution
most
in
real
time,
because
it's
a
dynamic,
I
analyzed
the
load
in
the
network
and
decides
where
to
put
my
to
create
Sdn
controller
dynamic,
so
the
proposes
creates
an
informational
information
of
their
FG,
didn't
find
at
the
emplacement
problems,
challenge
and
solution
directions,
and
that
is
thank
you.
B: Okay, so this was a presentation quite different from the others: the others also presented solutions, while you were presenting a problem. So maybe it would be a really good idea to form the questions around that: what do you think, is anything missing? We would appreciate some feedback, some hints on where and how to start, and so on.
K: It's really quite interesting, because, yes, this is presented as an SDN lesson, but this is the same problem we find in other topics related to network management. For instance, in my company we work on optimizing a monitoring infrastructure based on containers, where you have to do quite the same thing: you have to optimize the way you allocate resources and place them, and this is also a problem, I guess, in data centers with VMs.
L: So I'm just relaying a question from Carter Schmidt, who is asking: how can network measurements help in solving the problems you raised in this presentation, and where should the measurements come from? Yep, so: how can network measurements help in solving the problems you raised in your presentation, and where do you measure those?
B: We will now have a short break of around 20 minutes, so we will continue at five to eleven. So you see, we have a very precise time plan, and then we will continue with the next presentations. You will notice that the next presentations are from industry, from companies and institutions, while the first three were from universities. So let's see what interesting things will come up. Thank you; see you in 30 minutes.
M: Okay, hi everyone, is everyone seated? Hey. My name is Martha Fitch and I work for SIDN Labs; we are the R&D team of the Dutch registry, the country-code top-level domain. I may be representing the industry, as Roman said so nicely, but we are a non-profit organization, just to make things clear. Today I would like to talk briefly with you about our SPIN project. SPIN stands for Security and Privacy in the Internet of Things; it's a project we started recently at SIDN Labs, and it's about the Internet of Things.
M
I
could
argue
with
you
about
what
the
definition
is
of
the
Internet
of
Things.
We
have
been
struggling
with
that
internally.
Quite
a
while.
There
is
a
great
definition
of
the
I
Triple
E
organization.
It
is
only
86
pages
long,
so
you
can
have
a
look
at
that
if
you
like,
but
I
like
the
approach
of
rc7
452,
but
whatever
the
definition
is
of
the
internal
things,
we
all
agree
that
there
will
be
plenty
of
it.
There
is
a
lot
of
IOT
coming
towards
us,
both
in
our
homes
as
well
in
the
enterprises.
M
There
are
many
research
activities
trying
to
estimate
the
growth
here
is
one
made
by
garden.
The
figures
may
differ,
but
everyone
agrees
is
going
to
be
a
lot
of
IOT
and
and
with
that,
we
will
also
see
quite
a
number
of
problems,
because
all
these
devices
entering
our
networks,
all
many
of
them-
has
their
limitations
like,
for
example,
they
are
cheap
manufactured
cheap
at
for
security.
They
have
standard
passwords,
hard-coded
fess
or
it's
telnet
ports
being
open.
Sometimes
you
can
bypass
security,
but
just
going
to
a
direct
link,
so
in
general
there
is.
This.
M
Is
this
agreement
that
it's
it's
a
bit
of
a
mess
and
also
a
security
nightmare,
and
we
have
also
seen
evidence
of
this.
This
is
the
famous
mirai
botnet
that
you
may
have
heard
of
last
year
targeted
against
time.
We
were
particularly
worried
about
this,
because
Dyne
is
a
DNS
provider
and
we
also
run
a
DNS
infrastructure
for
Cottonelle,
and
this
attack
brought
many
important
services
down,
such
as
PayPal
for
the
finest
let
Romania
and
since
we
also
run
this
DNS
a
DNS
infrastructure
for
Darnell.
This
particularly
triggered
us.
So
we
came
together.
M: We particularly targeted home users, because we believe that in the enterprise world there is already a lot happening, with intrusion detection systems and stuff like that. But if you look at the home user, all his router, a CPE device or a home gateway, basically has is simple NAT and some firewalling. So that's how the SPIN project was born.
M
If
your
IP
camera
starts
scanning
for
22
or
less
that's
unusual,
then
what
we
would
also
like
it
once
these
anomalies
are
detected,
that
they
are
automatically
blocked
so
suspicious
of
the
traffic
from
empty
Intel
things
devices.
We
hope
to
automatically
block
them,
and
we
also
want
to
inform
an
average
user
about
them.
I
mean
not.
Every
user
is
spec.
M: Our main motivation for starting this project, because it's perhaps a little bit out of our league, is that we want to protect the infrastructure of operators, DNS operators and other operators in particular, and of course our own infrastructure has a direct benefit if this problem gets tackled in one way or another. We also have this 'good of the Internet' mentality.
M: So we want to do as much of the processing on the box itself as possible, and again, as I said earlier, we would like to allow users, average non-expert users, to configure the system according to certain security preferences. What we are also thinking about is to initiate some sort of collaborative initiative, to work together with a group of maybe security-related people to define certain security profiles. Think of a security profile that matches a smart TV, for example: a smart TV has a certain behavior, and it would be good to capture that.
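A security profile of this kind can be pictured as a small allow-list per device class, in the spirit of the IETF MUD work. Everything below (the profile fields, domain names and ports) is hypothetical, for illustration only, and not SPIN's actual format.

```python
# Hypothetical per-device-class security profile: a connection is
# "expected" only if both the destination domain and the destination
# port appear in the device's allow-list.

PROFILES = {
    "smart-tv": {
        "allowed_domains": {"updates.tv-vendor.example",
                            "app-store.tv-vendor.example"},
        "allowed_ports": {80, 443},
    },
}

def conforms(profile_name, dst_domain, dst_port):
    """True if a connection matches the device's expected behaviour."""
    p = PROFILES[profile_name]
    return dst_domain in p["allowed_domains"] and dst_port in p["allowed_ports"]

print(conforms("smart-tv", "updates.tv-vendor.example", 443))  # -> True
print(conforms("smart-tv", "facebook.example", 443))           # -> False
```

Anything outside the profile would be flagged to the user, or blocked automatically once the profiles are mature enough.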
M: The three main components of this thing are the three modules on the left: a module to visualize traffic, to make the user aware of what's happening in the network, together with a control panel that allows the user to configure certain settings; a module that is supposed to monitor devices for their behavior; and a module to control traffic. Basically, voilà.
M: You can see what your refrigerator or your television did, what happened. And again, as I said, the processing is done locally, so the device is not sending the behavior of the user to a cloud service of any kind; it's processed locally. And hopefully we will manage to make this largely automated. I mean, this is a provocative thing, I'm aware of that: it is tricky to have a device block things automatically, but maybe, if the profiles are mature enough, you can achieve this goal as well.
M: At the moment the SPIN project is both a concept and running code. We decided to start writing running code from the start, so we have a prototype running at the moment; it's based on OpenWrt, and it is currently bundled with our ValiBox device, which is an earlier project of SIDN Labs.
M
It
is
this
piece
of
hardware
on
the
left:
it's
a
GL
I
net,
small,
cheap
kind
of
home,
looser
device
and
with
the
family
box
project,
we
implemented
a
validating
resolver
on
that
thing,
with
trust,
anchor
management
etc,
and
we
took
that
project
to
an
able
spin
on
it.
So
we
have
announced
the
value
box
after
it's
been
functionality,
and
one
of
our
architectural
principles
is
to
focus
on
IOT
devices
with
a
what
we
call
a
predictable
behavior.
So
we
clearly
distinguish
between
a
home
computer,
a
PC
or
a
laptop,
for
example.
M
Tablet,
because
the
behavior
of
such
a
device
is
pretty
unpredictable,
I
mean
my
behavior
at
my
laptop
is
totally
different
from
the
one
of
my
wife,
for
example.
I
do
weird
stuff.
She
just
visits
a
nice
web
sites,
but
there
is
also
a
privacy
issue
info
there,
because,
if
I
would
be
the
manager
of
the
spin
device
in
my
home
I
wouldn't
even
want
to
know
what
my
life
is
due.
So
we
left
that
out
of
the
equation.
M
We
are
focusing
on
IOT
devices
because
IOT
devices
a
regular
IOT
device,
at
least
the
ones
as
we
envision
them.
They
have
a
predictable
behavior
like,
for
example,
the
refrigerator
again
it's
it
may
visit
the
manufacturers
site,
maybe
for
a
firmware
update
of
something,
but
it
may
also
visit
the
grocery
store
to
order
something
if
you
could
notice
that
you're
out
of
milk
I
don't
know,
but
that's
about
it.
It's
not
going
to
scan
the
entire
internet
and
for
22
or
do
all
kinds
of
other
stuff.
M
M: So we envision that we can define templates for these particular devices; that's why we focus on them. As you can see, the computer traffic is directly forwarded to the Internet, and you can have your Internet of Things network, or Internet of Things devices, let their traffic pass through the SPIN box. Here is a picture of an early prototype of the visualizer.
M
It's
a
bit
simple,
but
what
it
does
is
in
the
center
is
the
IOT
device.
In
this
case
it's
a
Samsung
Smart
TV
of
one
of
my
colleagues
and
as
you
can
see,
it
already
makes
a
lot
of
connections
if
you
turn
it
on
to
the
Internet
and
in
the
very
first
prototype
these
where
IP
addresses,
so
it
showed
the
IP
addresses
it
was
connecting
to,
of
course,
that's
not
very
useful
for
the
efforts
user.
M
So
we
build
a
kernel
module
in
the
spin
box
that
monitors
DNS
queries
and
when
it
can
correlate
a
DNS
query
to
one
of
the
IP
addresses
it
connects
to,
and
it
shows
the
name
rather
than
the
IP
address,
and
we
notice,
for
example,
that
this
Smart
TV
was
connecting
to
Facebook.
Now
mycolic
has
nothing
against
Facebook,
but
he
didn't
configure
a
Facebook
account
on
that
smart.
Even
so,
he
was
surprised
to
see,
but
his
smart
miss
Marchant.
He
was
connecting
to
Facebook.
M: The next thing you can do is click on the balloon showing Facebook, and you can tell the SPIN device to block traffic to Facebook; that is also currently working in the current prototype. Now, we have this little IoT lab within SIDN where we're testing various devices, smart speakers, most notably the Amazon Echo, for example, and we have noticed that if you start blocking too much of its traffic, then of course at some point in time the device will stop working.
M
Speaking of current status: as I said, we have this running prototype on our Valibox, an OpenWrt platform. We started with a focus on privacy; we hope to extend that to security-related things later. We are hoping to scan devices within the network, for example, to see if they have open ports or whatever. Our philosophy was to design a vertical slice, a kind of narrow functionality that works top to bottom, and once you extend that functionality it becomes broader. So we will have more modules as time progresses.
M
Being able to detect malicious traffic within the network, anomalies, etc. The software is free, so you can have a look at how we progress at the project website, and if you happen to have the same hardware that we use, you can also install firmware images straight from the Valibox website. So feel free to have a look at that, but remember it's all pure research, so don't expect too much.
M
The vision for the further future is: it would be great if we could get the SPIN concept, and maybe even the SPIN software, deployed into regular home devices. We are aware of some similar initiatives; apparently the industry has picked this up as well, but this is all proprietary stuff, and it is mostly cloud-based. We hope to come up with some kind of an open standard that vendors might ultimately implement in their home routers.
M
We also foresee some standardization work in regard to this. We foresee that, for example, there should be interoperability amongst Internet of Things devices. I can imagine that if you plug an Internet of Things device into your network, it identifies itself in a secure manner to, maybe, the router, so that the router can download certain templates belonging to that device.
M
So we foresee standardization work in that area concerning protocols and data formats. For the near future, we are working on refinements and improvements of the SPIN software. We also have one huge research question, and that is how to protect the protector: how to make the SPIN device safe itself. Also challenging: we have to initiate this elaborate collaboration effort, and the best platform for sharing device information in a standardized way. And that's interesting: I came across a draft that seems to touch this matter.
M
It's the draft-ietf-opsawg-mud draft, which more or less resembles the things we're thinking of, so it's interesting that there is already some ongoing work happening in that area as well. And hopefully, ultimately, I can tell my TV to only go to Netflix and nowhere else, not to Facebook, in my case. If you are interested in collaboration, please feel free to contact me; I'm here all week. We have also written a technical paper about this, which is a lot more elaborate than what I can share in 15 minutes.
N
Dharam Ashkani, unaffiliated: you mentioned standardization, and I just wanted to point you, maybe you are familiar with it already, to a working group called SACM, Security Automation and Continuous Monitoring. They do something like that: defining an architecture and a data model for profiling security patterns, or normal behavior.
O
Tim Carreño: yeah, I will say that this is actually great stuff, because, you know, standards organizations like the Broadband Forum, we actually were developing a new protocol. You talked about the standardization going on; there's a new protocol called the User Services Platform, which actually has a controller and an agent. It's focused on the home area network, and so, in effect, your device would actually be a controller there. They've got the discovery and that kind of stuff taken care of.
O
The other thing was LMAP, right? If you look at the diagnostic piece within these devices, whether it's an IoT device or the router, that's also a possibility, because you're really looking at diagnostic tests. This is actually great work, because we are very interested: specifically, I was brought into the Broadband Forum for how we troubleshoot, diagnose and react to the events that happen at home. So we're thinking along those same lines, so, you know, I'll be interested to talk to you.
E
Next is this passive sampling measurement, which is currently work in progress; we will present it, and maybe you can give us some hints here, or some pointers or feedback on it. We will present some first results and our conclusions. So, briefly, the AutoMon project: the project goal is automated performance monitoring, so bringing more automation into the whole performance measurement and monitoring area. It's funded by the German government, in a program for small and medium enterprises.
E
We are one of those. The timeframe is 2016 until 2019, so we are at the beginning, and you can visit the website if you like. Now, to get a better impression of the project, it maybe makes sense to tell you about the partners. We have some application partners, which are directly interested in the results we have: they provide problem statements, use cases and scenarios they have, and where possible we can get network data from them. These are DB Systel and IBM. And we have research partners: one university and two enterprises.
E
DB Systel is the service provider for the German railway, and also for logistics, so everything which is connected to Deutsche Bahn. And then there's IBM; it's IBM network services. They provide connectivity and services to airlines and enterprises; it formerly belonged to another company, and now they basically use the platform and solutions from IBM, though they're not part of IBM.
E
Then, for the small and medium enterprises: there's our company. We have a network monitoring solution called [unclear], but we do all things like large-scale enterprise network measurement and monitoring, and we also have a second department. We're a very knowledge-based company in Munich, and we have our customers in Germany. Then there's a second, not so small, a sizable medium-sized company; they have a product for data exploration, [unclear].
E
They go into the news and content management systems, so they're really [unclear] and UI-based. So these are the partners in the consortium: two companies and a university for research, plus the two application partners, and they try to put it together, I guess. OK. So what is the idea of the project, why do we think we need more automation? Today's challenge in this sector is that the network infrastructure is a business-critical part; it becomes more and more business-critical in enterprise networks.
E
So if there's something wrong, production stops, yeah, or whatever: you cannot do your bank transfers, whatever. And at the same time, you have fewer and fewer people operating increasingly large networks. So manpower is declining, but you still have to be able to operate large networks. Additionally, you have high dynamics in networks due to virtualization, SDN and all the automation things, so the network has more and more moving parts.
E
So this makes the automation of network monitoring mandatory: the network grows and is automated, and so the network monitoring has to keep up as well. Of course, there's always the discussion: is it only monitoring, so that we stop at monitoring, or always also the configuration of the network? That's also a discussion with our application partners and with our customers, in terms of liability: what if the automation breaks the network, because your monitoring system says "we want to change the network", who is liable for that? That's something which is in discussion.
E
Initially, you configure the router; then you have to do the problem analysis manually and interactively, you do the drill-down through several charts and time frames, and in the end you have an analyst report compilation. But this is typically done manually by the operations people. Then you link it with other tickets from the ticket data, and in the end you have the management report, because your management tells you: well, this application broke down.
E
What's the reason for it? You have to do lots of these things manually today in order to end up with a solution and a report. In AutoMon, we want to automate the whole process. So we want to start with the configuration of the network infrastructure and the network monitoring system, we do the problem analysis mostly automatically, and we can then do adaptive adjustments in case we need more fine-grained data from the system or from the routers.
E
We are aiming towards traceability in the visualization, so that you directly get the result presented that you would otherwise obtain manually by drill-downs and by linking with other data. If you get some insights there, there is basically an expert in the loop, the user, who can feed back into this, and in the end we have the automatic generation of the management report.
E
So this is the vision of the complete, broad project, and we now have one specific case where we get a little bit more down-to-earth: the problem of unobserved paths. This problem just came up in a meeting where we discussed some approaches, and this is, for example, the Deutsche Bahn scenario: you have a WAN which is monitored, and you have active measurements there, like Cisco IP SLA, to all locations.
E
So you continuously know what's going on there, but then you have the access networks out there, and you don't know what's going on in them. If an application breaks or is slow and the issue is somewhere over there, you may not know. You have an unobserved path, and you don't even know at any time which client is over there. This could be, for example, high queuing delays, access-network issues, high processing delay in the routers, because they are mostly software-based here, or there can be any error that you cannot imagine, yeah.
E
So this is why we measure: we measure because we want to see the performance of the network, and that's why we think it's important to look into this unobserved path. The idea we got is: we could sample packets in here, and we can do some timestamp analysis from the traffic. It goes into this direction, and it came up when we discussed the paper on clock-skew-based sibling detection based on TCP timestamps from the Technical University of Munich.

E
Then we trigger further automated investigation by our AutoMon mechanisms; and if it's in a good condition and application users complained, then you know nothing changed, you're still safe, and the network is still the same as before, due to the measurement. And now the research question is: how well can we passively measure jitter and delay increases, those large-scale delay variations? That's our current work, and I will hand over to Sebastian to explain it. Thank you.
P
OK, so in the following I'm going to present our technical approach in a little bit more detail, and in particular, how do we do this passive monitoring in principle? Well, as was indicated before, we rely on timestamp information. Now, what timestamps are typically available to us?
P
So this is the first timing information we get: the time of the host system, or the server. The second timing information which we collect is the time at which we do packet sampling on the router. So what we do is: we take every hundred-thousandth packet, and we take a snapshot of its payload, including the TCP timestamp option.

P
We also know what time it was, from the perspective of the router, when we took the sample, and then we try to establish a relation between these two timestamps. And based on this equation, which I'm going to show you on the next slide, we are attempting to detect delay variations, of course, with this approach.
P
But our basic assumption is like this. In the first step we assume that clocks don't jump and don't drift, so we live in a perfect world, and we can establish a linear relation between the timestamps of the host and the timestamps of the router. So we assume that we can take the TCP timestamp of our measurement sample, relate it to the packet-sampling timestamp in the router, do this for several samples, and we will end up with some sort of a linear equation.
P
Like: the TCP timestamp equals some slope m multiplied by the packet-sampling timestamp, plus some offset. Now, of course, there are two unknown variables in here, the slope and the offset, and I'm going to show you in the following how we try to gain some insight into the correct values of these. For the slope, we have a very simple approach. We did an offline analysis: we collected packet samples of our TCP flows, and in a second step we analyzed the data.
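Written out (the symbol names here are our own; the talk only speaks of "some slope m" and "some offset"), the assumed relation between the two timestamp sources is:

```latex
% Linear clock model assumed between the two timestamp sources:
%   ts_tcp : TCP timestamp option value carried in the sampled packet
%   ts_smp : the router's packet-sampling timestamp for that packet
\[
  ts_{\mathrm{tcp}} = m \cdot ts_{\mathrm{smp}} + b
\]
% m (slope) and b (offset) are the two per-flow unknowns to estimate.
```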
P
How much is the delta of the TCP timestamps compared to the delta of the packet-sampling timestamps? We did this for each pair of consecutive packet samples which we got. Now, of course, we are aware that there is probably a lot of noise on the packet-sampling timestamps, because we will measure the delay jitter and processing delay in that, but still, our results showed there are distinct peaks for different values of m, and we got two insights.
P
First of all: even with this noise, there is a way to guess the most likely slope, at least if your sample size is large enough. And second of all, there are two typical peaks: for Windows hosts the peak is at 0.25, and for Linux hosts it is 1. But in the end this means that we have to do this slope estimation, sorry, for every flow which we observe. Now, with this knowledge, we still miss the second variable, the offset b, and we try to estimate the value for it like this.
P
We use this b value and try to calculate at what time we would expect to observe a particular TCP timestamp. Now, if we should be wrong with our estimation, for instance, we see that the measured timestamp for a particular TCP timestamp actually arrived much sooner than predicted, this means that the sample which we currently process probably experienced less processing or queuing delay or whatever, and this means that our current estimate for b was wrong and we have to adjust it.
P
So now we have suitable values for the slope and the offset, and we can start to actually calculate the delay variation. I'm going to show you in the two last slides some preliminary results. As I said before, we did offline processing, but we did it with real traffic in our intranet, so we have local-area-network as well as wide-area-network traffic in this. However, we don't have any well-known reference traffic.
P
We don't have any known delay or jitter, but still, we see some interesting results, even if we don't have any, let's say, lab-setup conditions. In the first picture, you see a flow which has a duration of about 2 or 3 minutes, and as we can see here, the calculated delay variation is typically around 1 to 5 milliseconds, and at first glance there are no outliers at all. And so we assume, as this was local-area-network traffic, that our measurement accuracy is probably somewhere between 1 and 5 milliseconds.
P
The last result slide: this is a flow which was measured for an IP destination address in the internet, so this is wide-area-network traffic, and it was quite a long flow; we see here that this is going on for 12 hours. And we see an interesting sawtooth pattern. We don't see this sawtooth pattern for every flow which we observed in our measurements, but only for some, and hence we assume that this is probably an issue with the clock on the host, which drifts and eventually is corrected again.
P
Now, how does this fit into the big picture? What does delay variation mean? Does this boil down to a MOS value? No. Can you make any statements about voice quality with just this? Well, no. Again, it's useful for us as a first indicator for anomalies in the network: if we see that the delay variation is typically around 5 milliseconds, and then at some point we see some weird peaks, that detection can be done with this passive approach.
P
This is an input for the AutoMon control loop to trigger additional and more fine-grained monitoring, as we intend to do in the AutoMon project. OK, so I jump to the last slide. As we have seen, the passive monitoring of delay variations using TCP timestamps seems feasible, at least for the scenarios which we looked at, and the assumption, that the clock drift is negligible, did hold.
P
However, the timestamp accuracy of routers has improved a lot, so it's actually now feasible to do measurements in the order of magnitude of a small number of milliseconds. Our future work includes setting up a test lab where we have delay, jitter and everything under control, so that we can have a further look at how accurately we actually can measure; and, second, doing more measurements in the network of our application partners, in particular DB Systel, at least.
H
There is a draft in the RFC editor's queue about IPv6 Performance and Diagnostic Metrics, the PDM destination option. They have a destination-options extension header field in IPv6 where they put exactly these timestamps, sequence numbers and so on, and the draft is for conveying this.
Q
[Name unclear], University of Southern California: fantastic work, so thanks for bringing it; it's interesting to look at the data. Have you looked at, or considered, the effects of mobile devices as they change between, say, different access points and things like that, where distance from the access point might cause the sawtooth pattern or other strange artifacts? Though I don't necessarily mean the cell-phone network, even.
P
We should see some mobile devices in the original data, because, well, of course everybody has a smartphone, but indeed I didn't have a look at the mobile devices so far; the results which we had a look at were just business traffic, okay. So these effects would be quite interesting, but we didn't look at them yet, and I think, also, I don't know.
Q
[Start of question unclear] ...turning the data sets into some sort of notifications? No issues with too many false positives and things like that, when trying to actually detect real problems using the data? You come up with a whole bunch of interesting data sets; have you talked about how to turn them into something useful?
K
So, let's dive into it. A bit of context first: BGP has been there for a few decades now, so I guess most of you know how it works. To summarize, it provides kind of a complex environment: for example, you have limited visibility of the topology, you have only a limited portion of the information due to the best-path selection, and of course you have questions of legitimacy and integrity.

K
That's why, over the past ten years or so, there have been PhD theses, including mine, on the subject. The main idea behind this work comes from the fact that AS relationships are made by business agreements, which are negotiated a few times a year, and this also infers the idea that the inter-domain structure must be stable, at least for a large part. Now, about the methodology, a bit of an overview.
K
This is a synoptic of the process of BGP dynamics analysis. So, first, the definition of the primary path. We basically want to build a referential of the stable structure of BGP, so of the internet, and so we build a referential of what we call primary paths, which is the most-used path for each (router, prefix) pair. From this, we want to interpret the flow of incoming update messages in terms of primary-path unavailability, so to compare it to the primary-path referential; what we call an event is a primary-path unavailability period.
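As an illustration, a primary-path referential of this kind could be computed like this. This is a hypothetical sketch with a made-up record format, not the thesis code:

```python
from collections import defaultdict

def primary_paths(usage_records):
    """Build a primary-path referential: for each (router, prefix)
    pair, keep the AS path that was in use for the longest total time
    over the observation window."""
    totals = defaultdict(lambda: defaultdict(float))
    for router, prefix, as_path, seconds in usage_records:
        totals[(router, prefix)][as_path] += seconds
    return {pair: max(paths, key=paths.get)
            for pair, paths in totals.items()}

# Toy usage records: (router, prefix, AS path, seconds in use).
records = [
    ("rtr1", "192.0.2.0/24", ("AS1", "AS2", "AS5"), 86000.0),
    ("rtr1", "192.0.2.0/24", ("AS1", "AS3", "AS5"), 400.0),
]
ref = primary_paths(records)  # the long-lived path wins for the pair
```

Incoming updates for a pair can then be read against `ref`: any departure from the stored path opens an unavailability period.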
K
So this is a quick overview of the synoptic of BGP dynamics analysis; this is where we take place. To an extent, we then want this BGP dynamics analysis to let us extract and detect events, and be able to mitigate them, eventually. A bit more precision, sorry, on what this kind of event is. Basically, as I said, we are transforming the stream of updates into a stream of events, which basically groups update messages together following a primary-path loss.
K
So you lose the primary path and then, after some path-exploration phenomenon, you go back onto the primary path again. We expect to see this kind of event, transient events, quite often and with short durations. We also expect to see another kind of event: structural events, where you switch from the primary path that you have in the referential to a new primary path, which can be due to a change in policy, business policies, for example. So, yeah.
K
These are the two types of event that we will see, and we want to take a look at the temporal characteristics of those events to analyze BGP dynamics. So, let's take a look at the first results. For this work we took data from one monitor from the RIPE RIS project over a three-month period. It would be even more relevant with several vantage points, and for future work we intend to extend to more monitors.
K
So the first thing we look at is the turnover of primary paths. Basically, it's answering the question: how stable are primary paths over long periods of time? To do that, we computed the primary paths calculated on April 2016 and compared them to the primary paths that we would find in the different months following that one. You can see that primary paths are pretty stable over time.
K
So after one year, almost 50% of the primary paths are still the primary paths that you find one year later, but it is definitely decreasing over time. So you need to identify the structural changes and incorporate them into your referential, and be able to update this referential periodically, to have a meaningful view of the stability.
K
The second thing we look at is the prevalence of primary paths: do they exist, and how stable are they? OK, so we have the CCDF figure, the complementary CDF of the primary-path usage for all pairs of the data set over the three-month period. You can see, first, that IPv6 is a bit more stable than IPv4.
K
The next thing we'll look at is a closer look at the events themselves. This confirms our first assumption: you can see that almost all of the observed events that we find are events that come back to the primary path that we calculated from the referential, and you have only a few of the structural kind, as we expected. On the top x-axis you can find friendlier, more human-friendly, units than seconds, right.
K
You can see that the transient events basically have short durations, most of them: under a minute for about 50% of IPv4, and only 12.9% are longer than one hour; for IPv6 it's almost the same trend. As for the structural events, you can see that they, on the contrary, have way longer durations.
K
So how do we benefit from those events, and how do they help us analyze BGP dynamics? First, with these new event objects you gain a factor in terms of volume, almost one order of magnitude for IPv4 and IPv6, which mainly comes from the fact that, remember, the events include all updates during the path exploration following a path loss: the gain comes from grouping them together. And you can see that a lot of transient events have only one path that you switch to during the path exploration, before you go back again onto the primary path, and most of them have at most six paths in the path exploration, in the transients, for IPv4 and IPv6.

K
The second benefit that we get from this is that it is semantically rich. To show that (the tables may be a bit hard to understand at first, so I'm going to explain them), we compared with BGPmon, which is a public service that does hijack and other kinds of event detection on BGP. We wanted to compare our methodology and apply it on a use case: to detect outages and hijacks, compared to BGPmon. You can find their reports on BGPstream, their public website. So, first, the outage type of events: out of all the observable events, meaning that we have in our data set at least one update related to the prefix that they say is impacted by the outage, for most of them we can detect them.
K
We detect them at the same time, for most of them. Basically, what we look at is: do we have an event corresponding to the starting time announced by BGPmon that translates into the observed-event domain? The answer is yes for most of them; and even for a non-negligible part, about fifty percent, we detect them before they do, so there was an observed event that translated into primary-path unavailability before their announcement. And actually, out of those 236 early-detected events, the detection lead is significant.
K
There is a significant detection lead, about one hour before what they announce, for about 90 percent of them, and only for about nine percent of the events did we not find any event related to the outage. Another type of event that they report is hijacks, most of them MOAS hijacks, multiple-origin-AS hijacks, where you just look at the origin AS. So, out of the observable events, we can say the following.
K
The table may be a bit confusing at first, because we appear to be missing a large part. But actually, when we manually inspected the results, it seemed that this was only due to one prefix, an illegitimate less-specific prefix that was announced, which triggered a lot of hijack events because of all the more-specific announcements.
K
"Explicitly legitimate" means that when we compared the event domain with the primary-path referential, we saw that there was a primary path for this specific event, and that for us the hijack was not a hijack, because the origin AS was the one we had calculated in the primary-path referential. And for those events, sorry, for the less-specific prefix, we could find events.
K
Yeah, "implicitly legitimate" means that we could find an event due to a less-specific prefix. So for us, we didn't find any primary path for those events, which means that we cannot say that it is a hijack, because we don't have any primary path to compare this to, right.
K
What are their properties, really? And we also want to be able to remove all the BGP noise, so the recurrent events, for example the transient events, to be able to remove all that noise. And if you have any suggestions on new interesting results, I guess that will conclude my talk. Thank you for your attention, and if you have any questions or remarks.
L
Yeah, nice presentation, I just have a question. Giovanni, by the way. Some people use IP anycast, which allows you to announce the same prefix from multiple locations across the globe. Yes, and some people use the same autonomous system, some people use different autonomous systems. So how do you handle that?
K
That's true. So actually we want to refine the methodology a bit to take into account such specific cases, because you can have a lot of cases; we're really looking at the primary path from a router to an AS announcing a prefix. So indeed, those kinds of network practices, in fact, we're going to look into that to make sure.
K
There are engineering practices that can impact this as well, because we really look at the primary path; it's the entire path that we look at. So if you have one new AS, or one AS less in the AS path, you can get a mismatch, of course. So we want to refine this methodology to include such specifics, and a more precise and refined definition of the events.