From YouTube: IETF100-NFVRG-20171114-1330
Description
NFVRG meeting session at IETF100
2017/11/14 1330
https://datatracker.ietf.org/meeting/100/proceedings/
B: Okay, it's about time to start. Welcome. This is the NFVRG, and today we have a meeting in which we have basically three presentations from different research activities, as usual. This is a reminder of the IPR policy, the equivalent of the Note Well in the IETF, and, as usual, a request for you to be active.
B: Okay. At least the lady to my left, that is Sarah Banks, who is normally our secretary with her ever-perfect note taking, is acting as a co-chair today. And, well, again, this is a notice that we always show, precisely about the research-related events. You know that you can use the list to make announcements.
B: The idea is that you are expected to exercise some self-control on what you announce, and, in case of doubt, we will be happy to help. And, well, now a few news items and a few updates on how things are going inside the NFVRG. First of all, the gaps draft is close to going for IRSG poll. We found the four reviewers we needed, and they reviewed the document almost before coming here.
B: I have been acting as shepherd of the document, and almost all comments have been addressed, but a couple of them are worth a little bit of discussion with the reviewers, to be addressed very likely the moment we are back from this meeting. I hope, Carlos, the gaps document will be ready right then.
B: So make sure your name is written down; I think you're on the list. Another document that I think could be a reasonable candidate as a contribution is the one on service validation and service verification; if it has chances, it is because it has a very focused, limited scope. Validation, or are we talking about service verification? I think those are my own quotes here. So this is something I would like to discuss with you later on: I think there is a...
B: Finally, well, this is a list of other activities that I consider interesting happening at this IETF 100. You have the list there: IDEAS is about identity networking; TEEP is precisely on attestation; NMRG on network management (there was an interesting session this morning); PANRG on path-aware networking; the ACME activity on STAR short-lived certificates, which I believe will be essential for security in NFV; there is a side meeting on edge computing; or whatever other idea you have.
B: If I remember well... let me check a little and I'll announce it during one of the... well, the speakers are changing. Finally, this is the agenda. As I said before, the idea is to have three introductions to research activities that are related to the goals that we agreed to have in the last meeting in Prague. So the first one is precisely on this rethinking of NFV, which is a new view of the reality.
B: One is about the inclusion of some mechanisms for self-adapting the systems, and the second is on slicing and how you can dynamically create slices by doing fancy things with the virtual infrastructure managers. Time permitting, we have at the end an open mic period for whatever idea or discussion you want, I hope. So let's start with the first of the presentations, from the University of the Basque Country.
F: This is the agenda. I will take a look at how we understand NFV and how we have arrived at this new view on what NFV would be. We will take a look at the evolution of the technology, the proof of concept we ran, some limitations we have found in the actual definition of NFV, and some pointers to a solution. This is just in case you ever wondered where the University of the Basque Country is located: it is in Spain, right there. So let's go ahead.
F: If you want some information, next. Well, if we take a look at NFV as a technology, we always remember that the idea was replacing dedicated equipment with commodity computing power and switching elements, which in fact meant that we had common hardware, mainly computing nodes and switches, and on top of them we deployed other pieces of software: a virtual BRAS or whatever we needed.
F: I mean, this meant that in a certain way everything was pre-configured, even if later we had SDN as a way to configure, more or less on demand, the flows of a network service, to be configured on demand in a loop. This is the ETSI NFV architectural framework, and we can see the NFV logic: the virtual network was to be deployed over the network hardware, and likewise the virtual storage was to be deployed over the storage servers.
F: It was quite easy because, in fact, we were always speaking about blocks or volumes in the storage world; quite easy. Virtualizing computing was also quite evident, and virtualizing the network was not so easy, but we had some tools for doing it. Well, this had some implications. The implication was that if we were willing to process packets, it needed to be done on a computing node: at a certain point we need to take the packets out of the network, throw them to the virtual machine or whatever we have there, and send them back.
F: This has one important implication: we are not able to treat the packets outside of the data center where our computing node is located. So we started thinking about how it would be possible to get more efficient packet processing, and here is what we found really happens.
F: Well, there are two competing approaches to processing packets more efficiently. One of them is using general-purpose CPUs, which is the original way in which NFV was thought of, but we could also speak about SDN data planes. To make things even more spicy, we can also implement data planes on general-purpose CPUs, so we have some kind of mixed approach here.
F: If we take a look at how the hardware is evolving in the data-plane arena, we have some ideas regarding computing boxes and switches. Perhaps we could say that the price per port is much higher in computing boxes; that's true: you cannot think about implementing a general-purpose switch with multi-NIC computers, because that would be expensive. We also have the idea that switches don't need a very special place to live in; the environmental requirements are more tolerant for switches.
F: We also have the idea that changing the processing inside the switch is much more difficult than in the computing node. So these are, more or less, the options, depending on which kind of technology you choose. We are going to take a very high-level view; this is not a full review, I mean, there are things that are missing. I could not put every link, so I have only left some.
F: We can start with the fixed-function ASICs from merchant silicon manufacturers, Broadcom Trident and the like: the kind of ASIC that you can find in many switches, and you can find more or less the equivalent set of properties or features across all the manufacturers. You have vendor-specific ASICs, which means that you can have different features, for example custom pipelines, or whatever you can find. Then the network-processor-based switches; NoviFlow is one of those manufacturers.
F: You can have FPGAs, and you can even have software-based approaches at the end: DPDK, Open vSwitch, whatever. This also gives you some ideas. For example, once you get the list, you find one property that is important, one limitation: there is not only one way to describe the processing you need for the packet. Those approaches are not using the same language, not using the same model; they are totally disjoint in many of the cases.
F: Also, in a certain way, adding features gets more and more difficult in one direction and easier in the other: here it's easier to add features than if you are down here. Well, if we go to the last of these architectures, we can say that there are many activities regarding giving switches the property of stateful processing instead of stateless processing; and here I don't count meters or groups, which in a certain way do retain some state.
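The stateless/stateful distinction just described can be illustrated with a toy match-action pipeline. This is a hypothetical sketch to make the idea concrete: the class and field names are invented, and the byte-counter "meter" is only an analogy to the kind of per-flow state the speaker refers to, not any vendor's or OpenFlow's actual API.

```python
# Toy match-action pipeline illustrating stateless vs. stateful switch
# processing. Hypothetical sketch; names are illustrative only.

class StatelessTable:
    """Pure match -> action: the verdict depends only on the packet."""
    def __init__(self):
        self.rules = {}          # (src, dst) -> action string

    def add_rule(self, src, dst, action):
        self.rules[(src, dst)] = action

    def process(self, pkt):
        return self.rules.get((pkt["src"], pkt["dst"]), "drop")

class StatefulTable(StatelessTable):
    """Adds per-flow state (a byte counter), loosely akin to a meter:
    the verdict can now depend on traffic history, not just headers."""
    def __init__(self, byte_limit):
        super().__init__()
        self.byte_limit = byte_limit
        self.counters = {}       # (src, dst) -> bytes seen so far

    def process(self, pkt):
        flow = (pkt["src"], pkt["dst"])
        self.counters[flow] = self.counters.get(flow, 0) + pkt["len"]
        if self.counters[flow] > self.byte_limit:
            return "drop"        # rate-limit once the "meter" is exceeded
        return super().process(pkt)

table = StatefulTable(byte_limit=1000)
table.add_rule("10.0.0.1", "10.0.0.2", "forward")
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "len": 600}
first, second = table.process(pkt), table.process(pkt)
```

The same packet gets two different verdicts only because the table keeps state across packets, which is exactly what a purely stateless pipeline cannot express.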
F: Well, one thing that is important: most of them, except perhaps P4, are very experimental approaches. These are things that have been tried but not used very much. If we take a look at how the packet processing is done on the computing node, we find one very important thing: it's done with x86 code. I mean, we can speak later about the environment.
F
We
can
speculate
about
many
things,
but
we
can
say
that
at
least
there
is
a
lingua
franca,
which
is
that
some
kind
of
code,
which
can
mean
the
body.
Normally
we
are
trying
to
get
an
environment
that
is
a
smaller
that
is
quicker.
That
allows
speed,
processing
and
has
a
reduced
food
memory
footprint
that
that's
very
quick
in
order
to
set
up
an
and
undo
time,
and
we
have
well
developing
the
you
go
from
my
parasol
to
digital
machine
to
containers
to
universes.
Perhaps
POS
terminals
are
one
of
more
or
less
known
as
before.
F
F: All these technologies are able to take profit from low-level packet-processing improvements. There are several technologies being developed; most of them involve bypassing the TCP/IP stack, and many are already being integrated into other software like OpenStack. We have in-kernel processing with XDP, the eXpress Data Path; just to give an idea of where it gets the packet-processing features, it works by integrating eBPF, extended Berkeley Packet Filters, which is a kind of code that can be run at kernel level.
F: We have user-space solutions like netmap and DPDK from Intel, and so on. All those approaches are more or less integrated in more or less all of these; one or the other could be equivalent in some points, but, well, we can say that there are improvements that are going to benefit these approaches. So this means that, at a certain point, both approaches win. We also have some expectations on the CPU evolution: it's no longer going for more speed but for more cores, like Skylake.
F: So there are some conclusions. The boundaries between computing boxes and switches are blurring; they are no longer equivalent to a difference in packet-management ability. The more subtle difference is related to state: traditional switches are stateless, and this is true, but some new players involve stateful solutions in both silicon and software switches.
F: We think the gist is that there is a place for improving current architectures. Let's take a look at the proof of concept we were running. This idea started some time ago: we published a paper, and we took a look at how things were running. At a certain point we had what we called the network-agnostic VNF: the kind of VNF that was deployed without taking into account the underlying network.
F: Then we passed to the VNF in which the stateful processing stays in the VNF and the stateless processing is in the switch. Well, the use case for this proof of concept was FlowNAC, a flow-based network access control. In the first two approaches, all the traffic needs to go to the switch along with the authentication data; this was analyzed, 802.1X, when it gets to the VNF and, depending on the result, part of the traffic is re-sent to the network. This means that, at any given time, all the traffic is processed in the VNF.
F: If we go to the SDN-enabled NFV, both the traffic and the authentication data arrive, but the authentication data is studied and its state maintained in the processing block; there is some configuration sent to the SDN switch, and only the allowed part of the data is sent to the network. Let's take a look at how this was implemented.
F: We ran the POC with Telefonica, HP, Kinetic and UPV, and we demonstrated it; this was demonstrated in two places, and we were in Paris. Here you have a representation of the network service we were using: we have the test user, which is the VNF that generates the traffic; the VNF under test, which is the VNF that receives the traffic going from here; and here you have the FlowNAC VNF. We have an enforcement point.
F: Three scenarios; this is the first one. In the first one we have one server here and another server there, with the two VNFs and the user under test. The traffic that is flowing out goes to this virtual machine, along with the authentication traffic, and here we decide which kind of traffic may pass; the remaining traffic is sent back to the user.
F: OK, so this is the first approach. In the second approach we used some enhanced placement awareness, which meant that we were able to put the virtual machines in specific places, to do a specific placement, and we could use, for example, passthrough features and SR-IOV to improve the performance. We got very good results, but we still needed to pass all the traffic through the virtual machine. And then the third one.
F: In the third one, you see, now the traffic only goes to the OpenFlow switch; the authentication traffic goes to the controller, and the controller configures the switch to let pass only the traffic that is allowed. The conclusions you can tell here: these were early numbers; we have now improved them very much.
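The third scenario can be sketched as a toy version of that control split. This is a hypothetical illustration of the idea, not the actual PoC or FlowNAC code: the switch drops by default, only the (small) authentication traffic reaches the controller, and on success the controller installs an allow rule so that data traffic never has to transit the VNF.

```python
# Toy sketch of the SDN-enabled access-control scenario: default-drop
# switch, controller that installs allow rules only after successful
# authentication. Names and the credential check are invented.

class Switch:
    def __init__(self):
        self.allowed = set()     # MACs with an installed "allow" rule

    def install_allow(self, mac):
        self.allowed.add(mac)

    def forward(self, mac):
        # Data traffic is handled entirely in the switch: no packet
        # needs to travel to the VNF once the rule is in place.
        return "forward" if mac in self.allowed else "drop"

class AuthController:
    """Receives only the authentication traffic, keeps the stateful
    part, and pushes stateless rules down to the switch."""
    def __init__(self, switch, credentials):
        self.switch = switch
        self.credentials = credentials   # mac -> secret (toy database)

    def on_auth_request(self, mac, secret):
        if self.credentials.get(mac) == secret:
            self.switch.install_allow(mac)
            return True
        return False

sw = Switch()
ctrl = AuthController(sw, {"aa:bb:cc:00:00:01": "s3cret"})
before = sw.forward("aa:bb:cc:00:00:01")   # not yet authenticated
ok = ctrl.on_auth_request("aa:bb:cc:00:00:01", "s3cret")
after = sw.forward("aa:bb:cc:00:00:01")
```

Because only the authentication exchange reaches the controller, the controller can sit topologically far from the switch, which is exactly the property the speaker highlights later.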
F: We also got almost the same numbers regarding the speed of the links, but the important thing is not only the speed of the links; it is also the number of ports you use underneath, or the number of edges in the service graph. In this scenario you don't use any bandwidth communicating the two elements; in the others you needed links to communicate both elements.
F: There is also one important thing here: you needed a core, almost fully devoted, to do the forwarding, and now that core is no longer used, because this is done by the switch. And one more important thing: the authentication traffic, just to give an idea, is around five kilobytes per authentication, so the control function can be topologically remote.
F: Well, this was our experiment, and now I would like to highlight some conclusions, the limitations we see when we speak about NFV. At the end, I think that in NFV the V is usually written in bigger letters than the N; that's perhaps because the N was incorporated later, and the network was not considered to be part of what the user could define.
F: This is due to several reasons. Many times, delegating the control of part of the network is not an easy task, not only from the technical point of view but also from the administrative point of view. If we take a look at what this really means: at a certain point, the network service which is running in this component is part of the user's definition of the product, but underneath we have an infrastructure; it could be the logical switch instance 0 and a lower instance.
F: Another limitation is that we normally understand that networks tend to reorganize themselves. If we begin placing packet-processing elements on top of the network, we enter the placement game, which is quite difficult if we are using a network, because there are delays that you should take into account, and the service would be impacted.
F: Another thing is that the VNF descriptors do not contemplate processing outside the computing node. And there is the problem, as I already pointed out, with packet processing: there is no lingua franca for defining packet processors over the full set of possibilities we have.
F: And, well, the pointers to the solution. I think this is not a closed list, but what I really think is that we should try to rewrite the question, trying to give SDN the same importance we are already giving to the computing part. This is more than an NFV proof atop SDN; it is really thinking about both parts of the question. In fact, that could mean considering the network as really becoming part of the software involved in the network service provisioning.
F: You could also implement advanced slicing more easily, because you could really slice at the point you want. Just imagine that you could ask for a link, for example, to have VLAN conversion, to have also tunneling support, or even to provide resiliency, just in the definition of the network service: you send it to an orchestrator, and the orchestrator configures this not only with OpenFlow rules but also by deploying code. Well, the network equipment should be able to virtualize its resources; nowadays...
F: We have found that it is easier to virtualize software switches than to virtualize ASIC-based features: it is not so easy to stack OpenFlow switches, or OpenFlow switches on real silicon. I think that this recursion is something we have already studied in the past, and it's important. We really think that we should not treat this as an Enhanced Platform Awareness problem; it is more than that, and we really think that, well, it's not an easy, short-term solution.
F: No, no, no! That was not my point; if I said that, it was really limiting my idea. What I was trying to say is: if you consider virtual machines, or you consider containers, or you consider unikernels, at a certain point there are some similarities in the code you can use. I mean, you can use C or whatever, and you are pretty sure that this C will compile more or less the same, because you're using the same GCC compiler on any of the approaches. But, of course, the core placement is critical.
F: As I tell you again, this depends very much on the underlying technology as well. At least, I mean, at the university we don't have every brand of switch, but we have the problem that it's difficult to share the data path among, for example, different OpenFlow instances. It's not so easy, so I think that at some points, at least in our experience, you are bounded by the implementation of the switch. But I really think that in the near future you will have, for example, elements that would give you performance isolation, meaning that through one part of the switch you are not affecting the other. That kind of isolation could happen.
H: You get better metrics for power consumption and speed with proprietary hardware, but then you always have proprietary variants of those protocols, like OpenFlow or even P4. We had high hopes in P4, but we did some implementations, and it turned out that for P4 equipment from vendor A we had to use completely different code than for P4 equipment from vendor B, because they have just different kinds of resources, which get exhausted depending on how you are writing your code. So you can't really do vendor-independent stuff today.
F: That's one of the problems you have: I mean, you are never comparing oranges to oranges. That's the problem; they are moving different things. I mean, if you are running things over the same server platform, x86 or whatever, with the same Intel NICs, the power consumption and the CPU occupation are something that you can compare more or less easily for a given solution, but it is not easy.
F
It
could
happen
what
you
say
that
perhaps
one
a
specific
piece
of
software
gives
a
better
result
that
another
one
or
some
specific
configuration
so
I
I
really
think
that
well,
I
really
think
that
from
the
point
of
power
consumption,
perhaps
some
activities
could
be
improved
if
your
users
using
switch
elements,
if
this
happens
to
be
before
in
the
future,
something
that
they
really
would
like
to
see,
because
that
could
be
a
lingua
franca
that
could
be
also
such
a
more
general
architecture.
That
could
be
easier.
That
will
be
easier,
but
the
language.
H: Both vendors implement P4 perfectly, though.
F: Yes, but, yes, but it's a problem with P4: you compile it and, I mean, you can have P4 over Tofino, and you can have P4, I understand, also over NoviFlow and over bmv2. So these are different. I mean, it's more or less like C: it depends on what the device underneath is doing.
I: You're trying to present a strategy for doing NFV where portions of the hard decision making are done infrequently on the actual commodity, general-purpose computer hardware, and then the high-speed, per-packet processing happens in the more special-purpose hardware devices. Do you see any value in researching what is possible to do in that special-purpose hardware, like, you know, a minimum set of functionality that should be provided in order to meet certain use cases?
F: Well, I think that, for example, and it's something I tried to mention, the facilities you have nowadays to virtualize a general-purpose operating system over general-purpose hardware are not available in switching. I mean, you have a bunch of limitations if you want to do that. At a certain point (I will not name the brand) we wanted to run OpenFlow instances over the switch, and there was no possibility to put, logically, a switch on top of our switch.
F: We had to run a cable outside the switch to enter the other instance. This is a pain; this breaks the programmability of the approach. So something that could be done at the research level is to make the switching platforms able to be virtualized: to be able to run several OpenFlow instances, to be able to stack one over the other, to distribute the control. There has been research done about that, and I think that's something that is promising.
J: Okay, you probably know from this slide: my name is Pedro Martinez, I come from NICT, Japan, and I will present some efforts we are doing in terms of exploiting a set of measurements from different places, not just network resources, in order to actually perform the self-adaptation of systems, to be more flexible to dynamic service demands.
J: I will first start with a brief description of the motivation, or research topic, of this work, and our proposal, of course. I will then align that proposal with what I suppose most of you are more interested in, at least align it with the NFV management and operation structures, and finally I will conclude and give some hints on the future work. Next, please. First, this is a bit obvious, but as you know, we are in a world where resource demand is not continuous.
J: We have high variation on it, but normally, in general, in the past especially, not today, we were happier having fixed resource allocation: it is easier to manage, easier to decide, etcetera. But it is not optimal, so we have to answer it with some kind of elastic resource allocation, as we have found in cloud computing and others.
J: In many, many environments we have this kind of elasticity concept, but we are trying to go even further in that sense, because normally they are not meeting the actual demands of networking in terms of autonomic operation, autonomy, etc. So, in this use case, I will show how we can deal with a different...
J: No, sorry, not different; well, how to deal with a problem in the network that is happening after an incident, an external incident, like, for example, an earthquake, and how to actually react to that problem before the users of the network actually increase the usage. In this case, we have one help desk that is waiting for users it can attend when something happens, but in that sense that element should be enlarged in order to avoid some service disruption. Okay, next, please!
J: So, in this case, we propose to build the domains with a solution based on NFV, so we can easily increase or decrease the resources assigned to every part of the system, in order to react, as I have mentioned, to dynamic changes in the environment, especially when a resize is required, when the number of users attended by one service is increasing or decreasing. Next, please.
J: So then we propose to introduce one engine that is interacting with every controller of the network, the network controllers and also the virtual machine controllers, and, especially in this case, that engine will also interact with some detectors that we can call sensors, like, for example, a seismograph, an accelerometer, sorry, that will report that something is happening outside. This can also be fed by big data information.
J: But for our target right now, it's just one kind of external detector that is included. We have called this the autonomic resource control architecture, and it will collect that kind of information from multiple places, like, as I have mentioned, controllers and also detectors, and then it will analyze it in order to find out what is happening in the system. After that, it will adapt the resources of the system, communicating the new set of resources that should be assigned to the controllers.
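The collect, analyze and adapt cycle just described can be sketched as a minimal closed loop. This is an illustrative assumption, not the real engine: the sensor names, thresholds and scaling targets are invented, and the point is only the shape of the loop (external detectors can trigger a scale-up before the ordinary load metrics do).

```python
# Minimal sketch of a collect/analyze/adapt control loop that scales a
# service when an external detector fires, before the user load rises.
# Sensor names, thresholds and targets are invented for illustration.

def analyze(readings):
    """Decide a target number of service instances from sensor input."""
    if readings.get("seismograph", 0.0) > 5.0:   # external incident
        return 8                                 # pre-scale aggressively
    if readings.get("cpu_load", 0.0) > 0.8:      # ordinary reactive path
        return 4
    return 2                                     # baseline allocation

class Controller:
    """Stands in for a network/VM controller that applies decisions."""
    def __init__(self):
        self.instances = 2

    def apply(self, target):
        self.instances = target

def control_cycle(controller, readings):
    # collect (readings passed in) -> analyze -> adapt -> closed loop
    target = analyze(readings)
    controller.apply(target)
    assert controller.instances == target        # verify decision applied
    return target

ctrl = Controller()
quiet = control_cycle(ctrl, {"cpu_load": 0.3})
quake = control_cycle(ctrl, {"seismograph": 6.2, "cpu_load": 0.3})
```

The second cycle scales up even though the CPU load is still low, which is the anticipation property the speaker contrasts with purely reactive elasticity.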
J: In this case, a new function can be deployed, you know, in a new place, in order to balance, for example, the demand of users and things like that. Our intention is to achieve that in less than one second; that means adapting the whole system, not just one controller or things like that, but the whole system, in less than one second. But it is difficult, so we have found some problems: for example, we have a lot of situations and a lot of information to analyze in a really short time, so we have to filter, or apply different techniques, to decide which information is interesting and which information is not. We also have to apply a high-performance controller, both on our side and on the underlying control side, because we also have to reduce the delay. And, especially, we are now researching how to apply learning techniques to actually anticipate events.
J: Next, please. Every component will be separated as a microservice, so it can be pluggable and, of course, it can scale up or down as needed. I will also remark on the closed-loop effect: that means that, after one decision is taken, this controller will check that the objectives of that decision are being applied to the network. Next, please. Finally, it is supported, as many other systems, by a knowledge base, and reinforced with a common ontology.
J: Next, please. And now I will go to the most interesting part: how this is related to NFV and, especially, the management and orchestration defined by ETSI. Well, we place our engine in the position of the virtual infrastructure manager (VIM), and we have explored that and we have implemented it.
J: As I mentioned, this is the role played by our engine with this kind of system parameters, and also, as I have mentioned, the interfaces it has; but we have left out, maybe, the Vi-Vnfm interface, because this is, for now, out of the scope of our objective.
J: We have experimented with this on a physical OpenStack deployment, and we have seen, as mentioned, that we have extended the interfaces to enable the interaction with external elements, etc. And, to conclude, we have just designed this infrastructure to provide the functions of the VIM, we have extended the interfaces, and we have achieved good performance on the OpenStack deployment I have just mentioned, and we are using that deployment.
J: So our next steps will be, of course, to keep reducing the time required for this controller to perform the analysis, and also to extend the validation scenario, the basic use case I have mentioned, with other additional use cases, and to align it with additional requirements from NFV and SDN: for example, to figure out how to integrate the missing Vi-Vnfm interface that we have to provide in order to fully integrate our approach within the NFV MANO scheme.
J: It's not out of the scope of our objective; it is out of the scope of our current target. The reason why is that we are building the architecture gradually, based on real implementations, and we are leaving out some interfaces that we initially consider secondary: we are putting more emphasis on the interface with the orchestrator, which we thought was more interesting to have before having the other interface to the VNF manager layer.
J: Yeah, we have started working with that kind of solution, but the problem is, especially, our target to include information from the outside. That means our final target would be to anticipate some event; that means that at some moment we will also include big data information, and that is a problem with current implementations of these control solutions. So, for now, our main target is not actually to react to increases in the load of some element; it is to react to something that happened outside, even though we can fail and then nothing happens. For example, think that in this use case we have this help desk, and if something like a strong storm, an earthquake or a tsunami happens, it is supposed that many people will contact this help desk to get help, or to inform that something wrong has happened, that something is broken. Well, in the past we have seen some experiences with help desks.
J: They were totally broken after that, because they were overloaded, etc., and we have two solutions: we react and add resources, and what happens in that solution is that some people can be unattended, because we react after the system gets overloaded; or we anticipate the load and we increase the resources before. That's the problem that current solutions are not able to solve; even modifications of them cannot be capable, because they have to deal with external information.
K: So basically, what we've seen is there's a lot of stuff you can do, a lot of elements involved in NFV, but the observation that some of us made in our group is that there's quite interesting work going on in the network slicing part, but often the architectures presented in the diagrams had all the VNFs, should we say, executing in one data center. So there are a bunch of attributes you could apply to the network part that weren't always maintained in the elements which actually ran the network functions.
K: So in our mind there was some kind of dilemma: how do you maintain all these things? The stuff presented in these slides tries to address some of these issues and tries to present a symmetric model, and also some levels of abstraction that provide a level of consistency and control to maintain it.
K: So, overall, we make this case in the slides with the work that we've done for creating VIMs on demand, rather than having a single VIM for a data center. So basically we suggest you should be able to create a data center slice as easily as you can create a network slice, and that, if you create this data center slice, then this new slice should actually have its own VIM, not the one that originally came with the data center. Later on we'll show some of that and some of the architecture elements.
K
Then we have the orchestrator with the representation of the slice, and in essence it wants to deploy a service onto that slice, whatever that means. We have something where there's this representation of a slice in the orchestrator, and we actually have the slice elements there: some data center parts, some network parts, and the little green squares represent the VNFs that are part of this whole representation. These run on the physical resources inside the data center and the network.
K
Okay. So what we've seen is, yeah, there are many slicing models, and they slice the network. But, like I said earlier, the NFV elements happen to be scattered across the data center. What do we mean by scattered? Well, it could mean a whole bunch of things, based on various policies and requests to the VIM from the orchestrator. In essence, the data center is one big fat shared resource, and it doesn't maintain the same attributes that you could apply to the network.
K
And yes, there are a whole bunch of mechanisms for creating sliced parts of the network, and part of this is already being done in the IETF and also the IEEE. It's all coming on. Next one, please. And then what we're saying is: well, we should have the same capabilities inside the data center. We should be able to do slicing with some attributes. And once we have these parts — I've got some network sliced parts, I've got some data center sliced parts —
K
we have to connect all the bits to represent the single slice, end-to-end, spanning some geography. How this is done is not always obvious in all the cases, but in essence it needs to be done properly for full network slicing and service delivery to happen. So I just want to go through a few ideas and definitions of what these elements are. So we say a DC slice is an abstraction over the data center.
K
Okay, and it represents a collection of resources of the data center and, importantly, it can be controlled and managed separately from the other slices inside the data center, much like the network slice parts are managed independently of the other network slices. Okay, and this is different from having one big fat data center, right?
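The DC-slice idea described here — an isolated partition of data center hosts that gets its own VIM, managed separately from every other slice — can be pictured as a toy model. This is an illustrative sketch only; the names `DCSlice`, `DataCenter` and `allocate_slice` are hypothetical and not from the presented work:

```python
from dataclasses import dataclass, field

@dataclass
class DCSlice:
    """An isolated partition of DC resources with its own VIM."""
    slice_id: str
    hosts: list   # hosts only this slice's VIM may manage
    vim_type: str # tenant-chosen VIM technology, e.g. "openstack"

@dataclass
class DataCenter:
    free_hosts: list
    slices: dict = field(default_factory=dict)

    def allocate_slice(self, slice_id, n_hosts, vim_type):
        # carve hosts out of the shared pool; they are no longer
        # visible to any other slice (the isolation property)
        if n_hosts > len(self.free_hosts):
            raise RuntimeError("not enough spare resource")
        hosts = [self.free_hosts.pop() for _ in range(n_hosts)]
        s = DCSlice(slice_id, hosts, vim_type)
        self.slices[slice_id] = s
        return s

dc = DataCenter(free_hosts=[f"host{i}" for i in range(8)])
s1 = dc.allocate_slice("tenant-a", 3, "openstack")
s2 = dc.allocate_slice("tenant-b", 2, "lightweight")
assert not set(s1.hosts) & set(s2.hosts)  # slices never share hosts
```

The point of the sketch is only the contrast with "one big fat data center": each slice owns a disjoint set of hosts and carries its own VIM choice.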
K
We also say: well, once you have this, it now becomes a basis for the control of these virtualized elements. I now have some underlying system; I can now put my own virtualized bits on top of it. Okay, more excitingly, we can say: well, the slice isn't fixed. Now it's soft and dynamic. It can also be elastic, because I have some control over it. I can ask someone: please make my slice — my data center slice — bigger.
K
Please make it smaller, to match demands, because the data center will be bigger than the slice itself, and hopefully the data center will have enough spare resource to allocate a little bit more, or take some from us, as needed. Okay, next, please. Okay, so what next? Now we're kind of coming to — oh, this is quite interesting.
K
Well, because the VIM is going to be allocated on demand, we can actually choose any kind of virtualization we want. We're not fixed by what the guy at the data center chose to deploy when he picked his favorite bit of software. You might go: well, I want OpenStack, and I want this module and that module, and I want to do it like this. Well, if I'm allocating my own slice and I have my own VIM, I can actually choose what kind of mechanism I want.
K
Not that thing, because this is the one I want. Now that gives you a very different model and a lot of flexibility. And the other thing is that if the system is set up in the right kind of way, then, because it's your VIM for you to use over your slice, your hosts, you can control some of it — obviously not all of it — but there's a whole bunch of stuff you'll be able to do with the VIM for your slice that you can never do at the moment. Okay.
K
So what are we saying? We're saying a VIM needs to be allocated on demand for each slice, but we need some kind of management component that can actually allocate the slice and, as a consequence, the relevant VIM. The slice owner can manage, configure and control their own VIM, and that VIM could be any kind of technology
K
that's out there. And we see that if this is deployed, the actual data center itself can have a catalog of VIMs. They can tell you: we are able to deploy this kind of VIM for you, from a pre-existing catalog, because the VIMs can themselves just run in virtual machines, ready to go, ready to be instantiated.
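The catalog idea above — pre-built VIM images the data center can instantiate on request — can be sketched as follows. This is purely illustrative; the catalog entries, image paths and `instantiate_vim` function are hypothetical, not part of the presented proof of concept:

```python
# Hypothetical per-DC VIM catalog: each entry names a pre-built VM image
# holding a ready-to-boot VIM of a given technology.
VIM_CATALOG = {
    "openstack":   "images/vim-openstack.qcow2",
    "lightweight": "images/vim-lwv.qcow2",
}

def instantiate_vim(vim_type, slice_id):
    """Pick the cataloged image for this VIM type and return a handle
    for the slice owner; a real system would boot a VM from the image."""
    image = VIM_CATALOG[vim_type]  # KeyError: DC cannot offer that VIM type
    return {"slice": slice_id, "vim_type": vim_type, "image": image}

handle = instantiate_vim("openstack", "slice-42")
assert handle["image"] == "images/vim-openstack.qcow2"
```

A data center advertising such a catalog lets the slice owner pick the VIM technology, rather than inheriting whatever single VIM the operator deployed.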
K
Okay. Now, the next point is: well, some of these VIMs are quite big, and this is a kind of secondary issue. If you allocate a slice on demand and you then allocate a VIM, where do you put the VIM? This is some secondary work that we've done that isn't part of these slides, but the VIM elements themselves need to be placed somewhere, and there's a good chance, in a commercial setup, you'll be billed for your VIM, because it's going to be using some resource. Next, please.
E
You actually assumed that you need a VIM per each slice, and I don't really understand why this is the assumption. For example, if you're thinking about a regular network and we have multiple VPNs — do we have a network per VPN? No, we have one network which is shared between multiple VPNs, right? So, same here.
E
One VIM can be shared by multiple slices, so why do we need a VIM to be allocated for each slice? And in my opinion — and again, you may get to that later — the more we make the slice awareness at the upper levels, the more simplified we get the lower levels. But okay, maybe you can start by just clarifying the assumption here. Oh, okay.
K
That's it, and that's one end of the spectrum. And then you have these data centers where there's the VIM — the sort Amazon provides, or Google: you talk to one entry point, you say, give me some stuff — and that's the other end of the spectrum. And we're saying there's an intermediate point for certain resources — some telecoms resources, IoT situations, certain network services — where you want this intermediate model and it suits you. For many people it doesn't.
K
If you're deploying one web server and one database for a small application, you just don't need it, and this kind of scenario isn't relevant. But there are others. You know, we speak a lot to the telecoms guys in some of our projects, and this kind of situation is ideal for them, where they want the data center, they want to have some kind of flexibility, but they also want to be isolated from the others. Yeah, to some extent, yeah. Okay, it's like: please!
K
Please give me a slice of this size, and can you please allocate a VIM of a particular type. So the slice controller arranges for that, and once it's done, you'll get a handle back on your VIM for your slice. The important thing is that in your slice, your VIM cannot manage hosts that are part of other slices; they're isolated and controlled, yeah. And the slice controller, because it's the element of control, is the one that has the capabilities to add new hosts.
K
Nothing prevents it, I guess. Yeah, yeah, I mean, that's a model where you'd probably want one VIM for a slice, because the slice could be as small as two hosts, you know. But if I said, please allocate me a very small slice, would I allocate two VIMs? I don't know. I mean, that scenario might be outside my head, but in reality there's nothing to stop that. Yeah.
K
I think you'd allocate two independent resources with the different things and then rely on the other technologies for the connectivity, yeah. Okay, next one, please. So in the next few slides I'll show a kind of pictorial, structural view of what we're aiming for and actually built. Go on, please, yeah. So we say: currently we've got this one VIM per DC; this is kind of the norm. You have the VIM that talks to all the hosts, and what happens, as we've seen over time,
K
is the VIM gets more and more functionality to do more and more stuff, and then people go: this module doesn't work; well, they updated that module, but that doesn't work; and I don't need the VIM to do this. That's what happens at the moment. And then what you have is: well, I've got this one VIM, people try and deploy services over it, and because you mostly have very little control, the service elements — VNFs, whatever — which are colored in yellow and green here:
K
this host here has two functions in it and there's no isolation, and these guys might really want it. Okay, next, please. So one model says: well, we'll extend the VIM to do some kind of guaranteed stuff, make it look a bit like a slice. An interesting model, and I think there are elements of OpenStack that will try and do this, to isolate the things.
K
But, of course, you're still stuck with this issue that you've got one VIM, which is fine, but you've added even more functionality to a thing that already has a lot of functionality, and the person that requests the slice still has no control over any of the policies, any of the control. It's still someone else's issue. Okay, so this is why we say: let's try and go for this model, one VIM per slice, and we have this flexibility of being able to control it as the customer. And so here's a representation from earlier.
K
The little blue blob with the yellow slice represents this. So, you see, we built a proof of concept of this slice controller. We used some lightweight virtualization technologies just to demonstrate that it actually works. Our slice controller has a REST interface; we can make requests.
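What such requests to the proof-of-concept slice controller's REST interface might look like can be sketched as follows. The endpoint paths and field names below are illustrative assumptions, not the actual interface of the prototype:

```python
import json

# Hypothetical request body for creating a DC slice with its own VIM.
create_slice = {
    "hosts": 4,          # size of the data-center slice
    "vim": "openstack",  # VIM type to instantiate for this slice
}

# Because slices are elastic, a later request could resize the allocation.
resize_slice = {"hosts": 6}

# e.g.  POST /slices           with create_slice -> returns a slice/VIM handle
#       PUT  /slices/<id>/size with resize_slice -> grows or shrinks the slice
print(json.dumps(create_slice))
```

The returned handle would then be the customer's point of control for the VIM of that slice, which is the key difference from talking to one shared, data-center-wide VIM.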
K
So it kind of looks like this: we have this slice controller, and it actually has a catalog of all the physical resources, and when you make a request, some slice is allocated from them, as we said, yeah. And as you keep doing stuff, you get more slices, more VIMs, but of course we can see that the elements are genuinely isolated from each other. So now, just to finish off, I want to give more of an overall, high-level view: where do we go with this? Okay?
K
So now we have this slice that encapsulates network parts and DC parts, and it's connected, yep. Next, please. And we've got VIMs for the data center part, but nothing for the network part. This is my observation. From my background — I used to teach operating systems and compilers — I noticed that there's no element for talking to the network. We can talk to the data center, but there's no equivalent for the network part. So we say, well,
K
maybe we should have an equivalent. It might be quite simple to start with, but maybe it's there because we can do something with it. Next one, please. So here I present more of a stack of abstractions of what we can do. At the bottom layer we have the resources: the data center and the network parts. We see my slice controller that can allocate new slices, and it gives you VIMs. So the slice controller talks to the DC, and you get a slice partition of the data center.
K
We already have some mechanisms for taking the network and getting a slice part — I don't know what it's called, but it can be done. For the data center part we have the VIM; for the network part, some equivalent element — maybe some exist at the moment, I'm not 100% sure, people can correct me if I'm wrong. But then we can go: okay, so I've got my network slice part, I've got my DC slice
K
part, I've got my VIM for talking to my DC slice, and I've got my little NIMs, and now I can have a full representation, as an abstraction, of all these parts inside the orchestrator, and I have a point of control and interaction with all these elements. Okay, next, please. And that means the orchestrator can now deploy on this full slice aggregate — its slice — by talking to these elements to get stuff done. These can act as control points and measurement points, getting stuff back. Next, please, yeah, okay. And then, once we have deployment,
K
we can see the things being deployed. Okay, getting towards the end now. Next — so, conclusions. So, yes, there are some scenarios where we believe it's quite important to have these separate slices. We presented the case for why we think it's useful and beneficial and showed some of the elements and, as you just saw, there are some kinds of layered abstractions
K
you can build up on top of this, to give you a more functional system overall. And so, yeah, it's a bigger picture for control, and it is appropriate for use where you're kind of using clouds for networks. When things were separate, some of these models, people thought, maybe not so much; but now there's this idea of NFV and isolation and slicing, and we believe this is a really important way forwards. Thank you, yeah. So, finally, yeah, there's a lot of further work, particularly in terms of standardization, APIs, and mechanisms for interaction.
K
We see that — well, because I didn't bring my other slides it's probably hard to say — but in essence we saw that the VIM that you allocate for the slice is in essence part of the resource that you're utilizing. So if you're the customer of the data center, you ask for the resource for the slice, but the VIM that gets allocated is yours as well. So, in essence, that's something.
N
That was essentially the leading question to the actual question that I wanted to ask. If you have several VIMs which are running in the infrastructure, shouldn't you be somehow treating the question of conflicts, if two VIMs can essentially handle the same resource — which ultimately will be the case in this model, unless you do some kind of hard slicing, which you —
O
Actually, it follows on. To address — so the comment is more now: if we introduce in this picture the user of a slice — yes, of multiple slices — which is, sorry for using that term, the tenant, right, yeah. So I understand the advantage here is that you expose more control to the user of the slice, to organize its slice. Yes.
O
Then I would rather say that one VIM per tenant is what you're looking for, not one VIM per slice, because I think that the tenant can use one single VIM — it doesn't need to have multiple VIMs to coordinate — to manage its slicing.
K
I guess it depends who is requesting the slice. If you're a telecoms company and you're doing some kind of end-to-end application, you might allocate the slice for the telecoms guy on the basis that they may have clients who are the tenants of that slice, yeah. So you actually get a different layering of who's the customer and who's the tenant. At the moment, when you have one big data center with one VIM, the tenant is the telecoms guy, or even the end user, because he's the one that makes the request in this model.
O
The customer, but, as the author also said, I also assume that the VNFs of multiple tenants are onboarded on this same shared physical infrastructure. So you need some level of coordination — you need a kind of master VIM, in that case, to coordinate, to avoid this sort of conflicts.
P
Peter Ashwood-Smith, Huawei. I think I'm asking, or making, the same comment as the first question in a slightly different way, about optimisation. If you sort of partition into different subsets, and individually optimize across those subsets, and try to combine the result, you don't get an optimum, really. You need a flat set of resources to optimize over. So the point being that you could never actually fully optimize all the resources within the entire data center, right? It's recursively optimized within each one. Is that what you're getting at?
K
And in fact, someone else asked me a question a few weeks ago. They were like: couldn't this possibly use more resource, because you pre-allocate and do stuff? And the answer is yes. You're now trading off flexibility and control and policies that you pass up to the customer, that they never had before, versus some potential extra resource usage, and that's a trade-off for the customer, because they may go: I don't want to spend more money by having my own boxes — and they go: that's fine.
K
Again, nothing is obligatory. We're saying that here is a model that extends what's there. If you want to have three slices that happen to have three VIMs and you want to put stuff on top, that's up to you. We're just saying: here's an extra level, here's some more stuff. Once you've got your handle on your VIMs, you're the application guy, you're the customer, you do whatever you want.
Q
This and that, but my point is — so we can then say: oh, you talk about this slicing, the data center has two separate VIMs, yeah. Now the question I have here is: what is underneath, driving it? Are you envisioning some kind of orchestration driving that, to start doing this multiple-VIM slicing? Because if you think of the orchestration to build those VIMs — okay, fine, we can all do it — the question is what it's going to become.
G
Just to complement what he was asking — it was a difficult presentation for me, especially — well, what seemed to be missing, from my perspective, was the relationship to the MANO stack, and if you are able to show how this relates to the MANO stack, I think you are answering his question. But what is on top of the VIM? How is the VIM embedded in the MANO stack, in relationship to the orchestrator, to run and —
K
The relationships essentially don't change. The only thing that changes — which you don't see in the MANO specification — is this idea that the VIM is dynamic. In the MANO spec, you look at it and there's one of them, and the assumption is it exists in advance; there's no concept of it being dynamic at runtime and being elastic.
K
The other stuff stays the same. All we're saying is: this VIM can be allocated at runtime, and the layers above will get a handle on it at runtime. You don't have to know about it in advance. What you do need to know about in advance is the slice controller, because you have to have an element to talk to, to then allocate the resource and the VIM and get a handle on it. So a lot of the stuff in the slides is: all these things are dynamic.
P
Okay, Peter Ashwood-Smith, Huawei, again. I'm thinking out loud here, so hopefully this will come out correctly. If you've sort of physically isolated a set of logical data centers, so the resources are isolated, and their output and input aggregate into a statistical multiplex, how can you —
P
How can you trade off between the two through that pipe, if you don't know what's going on within those individual data centers? Doesn't it imply that, in order for this model to work end-to-end, you would have to have actual physical boundaries for the bandwidth between the two data centers — in other words, TDM or DWDM? I don't know the answer; I'm just asking.
K
A bunch of others, I've been told, as well. And so the actual technology of the network slice is not a feature of what we present this way. So this is the idea of why we have this NIM, this element to talk to the network: because you need an abstraction point for the higher level, to isolate it from whatever the actual network slicing technology is.
P
The purpose of slicing is not necessarily to guarantee complete isolation across all the resources. It's to create that illusion, right, so that things work, so they get the right QoE and QoS. If you have to do that by physically isolating all the resources, you may as well just build multiple parallel networks at n times the cost. The idea is to get statistical gain through cloud and cloud technologies wherever you can, so it's not: just physically isolate everything, and away you go. That's sort of the easy answer.
K
I mean, it's similar to saying: will I run multiple TCP streams across the network, right? What does the guy up there have to know about it? He doesn't. Programs open TCP connections, stuff goes, and the application never sees it, right? At a certain level of layering and abstraction, the guy below deals with it, and it works or it doesn't. Yeah, I think it's the same in this model: there's a certain layer low down, through various controllers, that you're not going to see.
R
That, in part, I hope, answered the previous question. At the end of the day, the slices are not for full isolation. This could be done many ways, but one of the issues here is that we are moving out from very large data centers to micro data centers, or network data centers, or data centers at the edge, where actually most of the functionality could be put in, including virtual functionality.
K
I think, on the time scale over which you want your slice — if you think, everyone says CDN, right? I want to run a CDN to here and there. I'm assuming it's not going to run for a few seconds. The time scale over which you want to maintain these bits of slice is quite big. So if you're prepared to wait some minutes to have some resource for days or weeks or months, it's a minuscule percentage of the lifetime.
K
If you think, as Alex said there, it's going to be at the edge, and some people are going to be there and we need it for half an hour, and we just want to allocate it because some people are going to go away — then, yeah, OpenStack starting up would be a significant portion of the time for which you have the slices, yeah, the resources. So yes — and this is why I said to Alex — this is a model for doing stuff, not a particular implementation.
B
The other thing is: weren't you trying to mix and match with what was presented at the beginning? I was worried, you see, because I mean I see both approaches moving in completely opposite directions. I mean, how could you try to implement something like the rethinking Eduardo was mentioning, about pushing functionality down to the switching fabric, on the one hand, because right now, if you have separate VIMs, each one of those VIMs will talk to an SDN controller.
K
I mean, no, I have an opinion. I'm glad you asked that question, because when he presented those slides, and when you're saying, oh well, this bit's difficult, that's difficult — the OS guy in me was like: that's because you don't have enough abstractions; that's because you haven't abstracted away from the device to the control element. You're getting this guy to talk directly to the device. At the OS level, that's the device driver. You need the next level up,
K
that does the intelligent stuff, to talk to a piece of software that talks to the device. So the bit where Eduardo said, I've got a network function in the switch itself — fantastic. What you don't have is a software representation of that useful function that's managed by something else. Because the bits he presented were really fantastic. I mean, it's like: yes, let's represent some network function in this device. Or is that device a device where you send a sequence of instructions and say, please do this? Well, this is no different from writing
K
some software on my laptop or server: the compiler generates the instructions, and the process is the abstraction that manages that runtime instance. So that implies that, in a slice that would include switches, you would theoretically have an abstraction — a software element — that could fit in to talk to it, and then —
E
I'd say that I share a lot of the comments that were raised here in the room, and also, when I heard you guys, the same feelings. It looks to me very much a bottom-up approach, and I think for network slicing we should really look at the top-down approach, to understand — even starting with the business here — what a slice is. Again, there was a comment that it is not there in order to make isolation, but it has the purpose to create this virtually isolated network, or whatever it is.
E
Quality of service, of course — and we need to understand what the relationship is here between the different entities, and then to know who can control what. And hopefully, in a good design, the resource level is not aware of the slice; it just provides the resources. All of that is handled at a much higher level. So there —
E
It's a detailed solution; it's not a top-down approach, because you already decided where the slice management is and where the orchestrator is, or whatever — you already created the hierarchy. You need to understand, top-down, how it works from the functionality point of view, and then understand who is doing what. For example, what I am missing here is the business-to-business gateway, because network slicing is between different entities. So a lot of things are done end-to-end, and then there is what is done internally.