From YouTube: IETF110-NMRG-20210308-1600
Description
NMRG meeting session at IETF110
2021/03/08 1600
https://datatracker.ietf.org/meeting/110/proceedings/
We will start on time because we have a quite packed agenda, and I would like to give as much time as possible to all the presenters to deliver their presentations. So this will be a general remark for everyone, both presenters and participants: please try to really stick to the timing.
Please be careful with the time when presenting, and if you have questions or comments, please raise them via the chat; I will try to address as many questions as we can during the meeting. Otherwise, we will bring them offline to the mailing list. Okay, thank you.
So this is the 1600 NMRG meeting session, part of the IETF 110 meeting, which will be running online this week. So welcome everyone. I am Laurent Ciavaglia, co-chair of NMRG, and I think Jérôme, my other co-chair, is also online; say hello.
A few formal announcements to make: we have a couple of slides on the "Note Well" concerning intellectual property and participation in the IRTF. This is a reminder of the IETF and IRTF policies in effect on various topics such as patents and the code of conduct, and it is only meant to point you in the right direction. There are different RFCs and best current practices that you can look into if you want more information.
A
But
the
main
aspect
is
that
the
iotf
follows
the
iatf
intellectual
property
rights,
disclosure
rules.
So
by
participating
in
the
iitf,
you
agree
to
follow
rtf
process
and
policies
in
particular.
If
you
are
aware
of
any
rtf
contributions
covered
by
patents,
you
should
declare
that
the
ihf
expects
that
you
file
such
ibr,
the
closer
in
the
timely
manner
for
the
rfc
documents
that
the
irtf
publishes
rtf
prefers
the
most
liberal
licensing
terms
possible
and
so
yeah
further
references
for
patterns
and
participation
in
the
following
links
concerning
privacy
and
code
of
conduct.
The Network Management Research Group is part of the IRTF, the Internet Research Task Force. The IRTF conducts research; it is not a standards development organization. The IRTF focuses on longer-term research issues related to the Internet, while the parallel organization, the IETF, focuses on shorter-term issues of engineering and standards making.
So while the IRTF can publish informational and experimental documents in the RFC series, its primary goal is to promote the development of research collaboration and teamwork in exploring research issues related to Internet protocols, applications, architecture, and technology. If you want more information about how the IRTF works, there is a nice RFC that you can read. Also, quickly, some practical information for this meeting: the session is being recorded.
A
There
will
be
an
automatic
registration
of
participants
once
you
are
logging
into
this
session,
if
you're
not
presenting
or
commenting,
please
keep
your
audio,
muted
and
potentially
the
video
off.
This
helps
also
for
the
bandwidth
management.
When
speaking,
please
state
your
name
and
any
affiliation
you
would
like
to
to
disclose.
This
is
useful
also
for
the
minute
takers
and
you
have
a
set
of
useful
links,
and
I
hope
that
if
you
are
here
connected,
you
already
have
those
information
readily
available.
If you want to, say, extend the experience beyond this NMRG session, please be informed that there is also a Gather social platform made available by the IETF for the whole IETF meeting week, and you can have some nice interactions before and after the sessions with some avatars. It's a nice platform, you may try to test it; it's very, very interesting, and sometimes the chairs are there too.
We still have to figure this out with the organizers of the conference: another interim in June, and the next plenary meeting should potentially be with IETF 111. We don't know yet exactly if it will be on site in San Francisco or online; the decision will be made on April 16th.
A
So
that's
it
for
the
quick
news
about
the
working
group.
There
is
a
research
group.
Sorry,
quick
overlook
at
the
agenda,
so
we'll
start
with
some
updates
quick
updates
on
some
of
the
active
documents
of
the
research
group
they're.
Not
all
the
documents
are
here
for
some
of
the
documents
we
invite
you
to
exchange
first
on
the
mailing
list,
and
then
we
have
two
sessions
of
technical
talks,
one
on
intent-based
networking
and
the
second
one
on
ai
for
network
management,
with
a
focus
on
some
of
the
challenges
of
the
ai
research
challenges
document.
A
So,
as
you
see,
it's
quite
packed,
so
we'll
start
right
away
with
shane
and
I
will
present
the
slides
for
you.
D
D
D
D
D
Digital twin technology has been widely adopted in industry; some examples are 3D printing, computing, and design. Here we really want to apply this digital twin concept to the network field. The goal is that we really want to build a digital twin platform for more efficient and intelligent network management, and we also want to leverage it to drive innovation with a more optimized service life cycle. As for the technical contribution from the authors: first, we sketched a basic reference architecture based on the key elements we identified in the digital twin concept.
Yeah, so for the digital twin network composition, we identify five key elements. Number one, interfaces: we identify two types of interface. One is the service interface, which will be used between the application and the digital twin network platform, and which can be used to access the data, build applications, and invoke capabilities.
The second is the telemetry interface, defined between the digital twin platform and the physical network, which can be used to populate and collect the data for further data processing.
Number two, data: the data we collect from the underlying physical network is used to represent and understand the state and behavior of the real-world thing. Number three, models.
These models are based on simulation of the physical network: not only can you understand the state and behavior of the real-world twin, you can also use them to predict the behavior of the real-world twin. The data can include, for example, static data, network topology data, performance metric data, inventory data, and log data. And number four, mapping: we establish a mapping between the physical network and the virtual twin network.
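To make the elements concrete, here is a minimal sketch (not from the draft; class and method names are hypothetical) showing how the telemetry interface, service interface, data, and mapping could fit together:

```python
# Hypothetical sketch of the digital twin network elements described above:
# service interface, telemetry interface, data store, and mapping.

class DigitalTwinNetwork:
    def __init__(self):
        self.data = {}        # data collected from the physical network
        self.models = {}      # analytics / prescriptive models (not shown)
        self.mapping = {}     # physical element -> twin element

    # Telemetry interface: populate twin data from the physical network.
    def collect(self, element, sample):
        self.data.setdefault(element, []).append(sample)
        self.mapping[element] = f"twin:{element}"

    # Service interface: applications read the twin's view of the network.
    def query(self, element):
        return self.data.get(element, [])

twin = DigitalTwinNetwork()
twin.collect("router1", {"delay_ms": 12})
print(twin.query("router1"))    # [{'delay_ms': 12}]
print(twin.mapping["router1"])  # twin:router1
```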
So, some sample application scenarios. Digital twins used in industry use the network as a tool, but a digital twin network used in the network field uses the network as a data source. We give four typical examples. The first one is network maintenance engineer training: usually, network management involves humans doing the network maintenance for some of the tasks.
Some tasks may really require some expertise, and to get this expertise you really need to take time to learn and to make sure you can reach the expected level. With a digital twin platform, we can help train these network maintenance engineers to make sure they can meet certain criteria and reach the level we expect for specific expertise.
The second is machine learning training. Traditionally, when we do machine learning training, we get the data set and train the machine learning algorithm in the lab, and then we can move to the production stage. This is not automatic, and there may be back and forth when the lab training has some bugs. With a digital twin platform, we can automate this process and help provide more efficient network management and more efficient machine learning.
The third scenario we call DevOps-oriented verification. Usually DevOps provides a life cycle of software development, including planning, building, testing, and deployment, and what is missing is verification.
For some of the configuration changes, you need to make sure whether these configuration changes can be applied; DevOps actually lacks verification in this life cycle management. So with the digital twin platform, we can leverage the platform to verify the configuration to make sure only validated updates are applied. The last one we call network innovation: when you introduce some new network API, or some new protocol stack, before they move to the production stage you really need to do the testing and the bug fixing. We can leverage the digital twin platform to automate this process and provide more efficient network bug fixing.
So when we look at this draft, we discussed it and we think this is very useful work, but we identified several issues we need to reach agreement on among the authors, and we would also like to solicit feedback from the NMRG community. The first is issue one: how is data different from the model?
As we know, in the digital twin network composition we identify five key elements. In industry digital twin entities, data and model are apparently two common and separated components; for example, in smart manufacturing, in a smart factory, we can define the data and the model separately. Of course, data and the model need to be used together.
Data can be structured to follow a set of well-known data model requirements. So, for the digital twin network we are proposing in this draft, let's first take a look at the definitions. For data: digital twin data can be used to represent and understand the state and behavior of the real-world thing. For models: we identified two kinds of models. The first is the computation and analytics model, which can not only describe and understand the twin's operation, state, and behavior, but can also predict the twin's operational data. The second model we identify is used to prescribe actions based on service logic.
D
One
of
the
typical
example
is
a
policy
related
model
or
like
a
event,
condition
action
model.
Actually
they
can
provide
prescribe
this
kind
of
action
and
so
based
on
this
definition,
analysis
we
think
actually
data
and
model
should
be
separated
and
data
is
a
colostrum
called
colorstone
for
constructed
digital
twin
system
for
model.
Actually,
this
can
be
seen
as
a
source
to
analyze
the
diagonals
immune
and
control
the
physical
network.
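The event-condition-action model mentioned above can be sketched, purely for illustration (the event fields and policy names are assumptions, not from the draft), as:

```python
# Minimal event-condition-action (ECA) policy sketch. A policy fires its
# action only when an event of the right type satisfies its condition.

def make_eca(event_type, condition, action):
    """Return an ECA policy over simple event dicts."""
    def policy(event):
        if event.get("type") == event_type and condition(event):
            return action(event)
        return None  # condition not met: no prescribed action
    return policy

# Illustrative example: if link utilization exceeds 80%, prescribe rerouting.
reroute_policy = make_eca(
    "utilization",
    condition=lambda e: e["value"] > 0.8,
    action=lambda e: f"reroute traffic away from {e['link']}",
)

print(reroute_policy({"type": "utilization", "link": "l1", "value": 0.9}))
# reroute traffic away from l1
```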
So we tried to figure out how orchestration is different from the other key elements. For the model, one model role is to prescribe actions based on service logic. Now we can see that orchestration can also be broken down into two roles. The first role is that it can control the digital twin network environment and its components to derive the required behavior. The second is the main role of the orchestration component.
D
Actually,
they
will
provide
life
cycle
management
for
all
these
components
and
it
can
provide
repetitive
and
it
can
provide
reproductivity
and
based
on
the
definition
we
will
give
here
and
so
since
there's
some
over
laughing
with
the
model
role
we
mentioned
in
the
previous
slides
that
we
need
to
figure
out
how
to
address
this
kind
of
overlapping.
Trying to be quick, we'll move to the next one. We also have three other issues that we already discussed with the authors, and we would like to solicit feedback on them. One of these: how should the interface be defined? We identify open standard interfaces and internal interfaces, so our conclusion is that we can start from the open and standard interfaces and later on revisit the internal interfaces.
Next, so the next issue is, for one of the elements, who is responsible for checking the difference? We think it is the mapping, and we support two kinds of mapping: one is one-to-one mapping, the second is one-to-many mapping. The key difference is that one-to-one mapping between the physical network and the digital twin requires a continuous flow of data exchange, which is a huge amount of data exchange; but for one-to-many mapping, there is just occasional data exchange between the two digital twin networks. Yeah, next.
So the last one is the continuous verification we propose in this draft: how is it different from CI/CD? As we know, DevOps also provides whole life cycle management, including CI/CD, and we give an example. Based on our evaluation, we think continuous verification is an extension of DevOps CI/CD. One of the examples we give relates to DevOps-oriented verification: because DevOps lacks continuous verification, it ends up increasing the risk of deploying invalid updates, but with continuous verification this risk is addressed.
We will try to make this solid in the introduction and abstract, and we will further articulate the relation between the data, model, and mapping, and we will also analyze the requirements from the mapping and application interfaces. We would like to hear your feedback and will address any issues raised in the meeting. Thank you for listening.
So I just want to refresh why, because last time there were a lot of questions around it. I just wanted to repeat at a very high level what the overall goals for the NMRG are and what was proposed.
So the NMRG goals were to agree on intent-related terminology and classification and provide a foundation for future discussions related to the intent topic, where all participants have a common understanding. So it is really about sharing terminology and concepts. Originally, two NMRG drafts were proposed to address this goal.
So the scope of this draft: there was no clarity originally about what intent represents for different stakeholders. We had the concepts draft, which gave some high-level concept definitions of intent and also presented an overview of intent-based networking functionality, but that's not enough. We really needed to give more information and to do some classification of different intent types in order to understand what intent represents to different stakeholders, and those stakeholders could be network operators, administrators, end users, customers, etc.
There was also no common understanding of how to classify intents and what types of intents exist. There were different types of intents being used in different scenarios, and there was no common taxonomy there. So this draft addresses these issues by proposing an intent taxonomy and methodology.
So, as you know, we propose a methodology that could be used to generate this intent taxonomy and also to extend and customize it. We explained this many times previously, and there were a lot of comments asking to add an example, and therefore we added one, which was the IETF 108 PoC, a multi-layer approach for IBN, by Barbara, Walter, and others. It has been used as an example of how the classification methodology could be used.
So for those who are familiar with this PoC, they have two types of intents: a slice intent and a service chain intent. After discussion with them, we determined that their PoC could be used for carrier but also for data center scenarios. So then we identified what types of solutions or scenarios, what intent user types, what intent types, what the scope of the intent is, what the network scope for the intents is, how they are abstracted, and whether they are technical or non-technical.
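To make the classification dimensions concrete, here is a hypothetical sketch (the dimension values are illustrative, not taken from the draft) of how the two PoC intents might be recorded and queried along such dimensions:

```python
# Hypothetical records of intents along the classification dimensions
# mentioned above; values are illustrative, not from the draft.

taxonomy = [
    {"intent": "slice intent", "scenario": "carrier",
     "user_type": "operator", "scope": "network-wide",
     "abstraction": "technical"},
    {"intent": "service chain intent", "scenario": "data center",
     "user_type": "administrator", "scope": "per-service",
     "abstraction": "technical"},
]

def by_dimension(records, dimension, value):
    """Select the intents matching one classification dimension."""
    return [r["intent"] for r in records if r[dimension] == value]

print(by_dimension(taxonomy, "scenario", "carrier"))  # ['slice intent']
```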
We used different symbols and different colors for carrier and data center, and showed where these intents fit into the overall taxonomy and classification. Next. So we really want to acknowledge everyone who reviewed, suggested, commented, and proposed text for this draft. There were many people who helped and gave their input: Mahdi, Ibrahim, Laurent, Alexander, Yahya, Jérôme, Pedro, Daniel, Branislav, and again Jerelyn and Jeff.
We also thank Barbara, Walter, and David for their contribution and for providing the multi-level approach PoC, which we used as a sample use case for intent classification. Since we shared the last version of this draft, version 02, we also got review comments from Mauka, David, and Benoit. Thank you all. Sorry, this slide was written before that contribution, so I will add him to the acknowledgements. Next.
So the following are the issues that we resolved in this draft. There were 38 comments in total, but we regrouped them into eight high-level categories, just to explain what we did in version 02. So we sharpened our draft's position in relation to the concepts draft.
E
Then
we
provided
detailed
description
of
the
intent
classification
workflow
and
how
we
can
use
the
methodology
to
extend
the
taxonomy
classification.
Then
we
integrate
it.
As
I
mentioned
the
epoch
into
the
draft
and
use
it
as
an
example
for
our
classification,
there
was
some
requirement
to
clarify
further
requirements
for
different
intent
types
based
on
the
context.
We
added
that
we
addressed
the
benefits
of
intents
to
different
network
requirements.
E
We
added
the
scope
section,
so
we
moved
some
text
there
and
we
kind
of
identified
the
scope
and
priorities
for
the
project
and
included
definition,
section
introducing
different
terms
related
to
ibn
id
and
with
reference
to
ibm
concepts
and
overview.
And
then
we
had
the
various
readability
improvements,
but
we
did
receive
some
further
comments
that
we
would
address
in
next
few
days.
Next.
So then we asked for the RG last call, and a shepherd has been assigned since the last IETF meeting: Laurent has been assigned as shepherd, and the three-week RG last call was initiated on the 23rd of February. So we are now collecting the comments; I already mentioned that we received some comments, and we will be receiving them until the 15th of March. Next.
We believe this document is very stable and it is ready for IRSG review. We spent a lot of time on this document and we really think it is ready now, so you can see the timeline for the RG. So, next steps: the authors and co-authors believe all reported issues are resolved at this stage. We will continue collecting comments from the research group until the 15th of March.
Not going into the queue, but just for the sake of the other presentations: this is anyway a research group document that is under research group last call, so please enter those comments offline, please.
So for simplicity, we will use NMI instead of network measurement intent. The major components include three parts, and the sequential relationship between the components is shown in the figure. Next slide.
They have the ability to identify, in the NMI, the network performance that users want to measure, such as delay, jitter, and so on. At the same time, they allow users to express the NMI of network performance in a variety of interactive ways, to ensure the accuracy of the identification of the NMI and of the NMI translation.
F
So
in
nmi,
data
collection
and
analysis
should
be
based
on
the
selected
environment
scheme
and
the
content
to
be
married.
That's
determined
in
the
previous
steps
automatically
realize
the
collection
on
the
mind
and
generate
corresponding
data
analysis
results.
Well, if the transmission frequency is too slow, some instantaneous network anomalies will be missed and the network status cannot be accurately reflected. So, in order to accurately measure the network state, and especially abnormal network conditions affecting the business, we should occupy the network bandwidth as little as possible, taking into account the low processing capacity of the data analysis system.
F
The
network
delay
between
different
thresholds
represents
the
different
status
of
the
network
and
the
business
when
delay
when
the
delay
value
is
blowing
warning.
The
network
and
the
business
are
both
normal
and
when
the
delay
is
between
warning
and
alert,
it
represents
that
the
network
is
normal,
but
the
business
is
that
it
represents.
The
network.
Fluctuation
is
abnormal,
but
the
business
is
normal
and
when
the
delay
exceeds
the
large
value,
both
the
network
and
business
are
abnormal.
When the network delay exceeds the warning value but is lower than the alert value, passive measurement samples 60 percent of business data, the transmission message frequency of the active measurement is adjusted to the median value, and the running state of some key devices in the network is collected. When the network delay is less than the warning value, passive measurement data is sampled at 20 percent, the active measurement message frequency is adjusted to the lowest, and the running state of key network equipment can be collected as needed.
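The threshold-driven adjustment described above could be sketched as follows; the behavior above the alert threshold and the exact return fields are assumptions for illustration, while the 60%/median and 20%/lowest settings come from the talk:

```python
# Sketch of the delay-threshold logic described above: adjust the passive
# sampling rate and active measurement frequency based on where the
# measured delay falls relative to the warning and alert thresholds.

def adjust_measurement(delay_ms, warning_ms, alert_ms):
    if delay_ms < warning_ms:
        # Network and business both normal: measure lightly.
        return {"passive_sampling": 0.2, "active_frequency": "lowest",
                "device_state": "as-needed"}
    if delay_ms < alert_ms:
        # Network fluctuation abnormal, business still normal.
        return {"passive_sampling": 0.6, "active_frequency": "median",
                "device_state": "key-devices"}
    # Both network and business abnormal: full-rate measurement
    # (this branch is an assumption; the talk does not specify it).
    return {"passive_sampling": 1.0, "active_frequency": "highest",
            "device_state": "all"}

print(adjust_measurement(30, warning_ms=20, alert_ms=50))
# {'passive_sampling': 0.6, 'active_frequency': 'median', 'device_state': 'key-devices'}
```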
And this is the SLA performance measured with the NMI at concrete times.
As this was a presentation on a research group document, we will ask you to handle any comments or questions on the mailing list or with the authors, please.
There have been several studies, including our work, targeted at the requirement definition to deployment phases, but the number of studies for the testing and evaluation phase is limited, especially for network-level evaluation. Meanwhile, as all of you may know, intent-based networking (IBN) is a concept proposed by Cisco, in which the network system is managed based on users' abstract intents.
G
In
this
study,
we
target
the
automation
of
testing
phase
by
following
the
concept
of
ibn
next,
please,
before
introducing
today's
topic,
let
me
introduce
ongoing
project
of
our
research
group
in
nec.
We
have
proposed
and
developed
weaver,
which
is
an
ai
empowered
network
system.
Designer
weaver
accepts
an
abstract
intent
as
an
input
and
recursively
refines
the
abstract
part
of
the
intent
and
generates
huge
number
of
system
design
candidates.
H
Therefore, our goal is to automate such network-level testing phases of system integration, especially for IBN-based systems. Today we propose an automated generation method for evaluation programs. Our method accepts a set of system requirements and a system design derived from the requirements, and generates a program compatible with the system design, despite differences of OS or network configuration.
First, let us explain the model settings in this study. We model both a system design and system requirements as directed graphs composed of sets of nodes and edges. A system design is a concrete network configuration, and system requirements are an abstract expression of the system design. Evaluation units, which are related to users' intents, are expressed as abstract nodes or edges.
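A minimal way to model the two graphs, assuming simple node and edge sets (this is a sketch, not the authors' implementation; node names and attributes are illustrative), could be:

```python
# Sketch: system requirements and system design as directed graphs.
# Requirement nodes/edges are abstract; design ones are concrete.

class Graph:
    def __init__(self):
        self.nodes = {}   # name -> attribute dict
        self.edges = []   # (src, dst, attribute dict)

    def add_node(self, name, **attrs):
        self.nodes[name] = attrs

    def add_edge(self, src, dst, **attrs):
        self.edges.append((src, dst, attrs))

# Abstract requirement: an HTTP connection with a bandwidth constraint.
requirements = Graph()
requirements.add_node("app1", role="client")
requirements.add_node("app2", role="server")
requirements.add_edge("app1", "app2", protocol="http", min_bw_mbps=500)

# The concrete design refines abstract nodes with OS / software choices.
design = Graph()
design.add_node("app1", role="client", os="ubuntu")
design.add_node("app2", role="server", os="ubuntu", software="nginx")
design.add_edge("app1", "app2", protocol="http", min_bw_mbps=500)

# An evaluation unit corresponds to an abstract edge in the requirements.
evaluation_units = requirements.edges
print(evaluation_units[0])
# ('app1', 'app2', {'protocol': 'http', 'min_bw_mbps': 500})
```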
G
This
is
the
outline
of
the
proposed
method.
Our
method
accepts
a
set
of
system
requirements
and
system
design
to
generate
a
correct
evaluation
program,
parameter
search
on
the
system.
Design
graph
runs
for
each
evaluation
unit
by
referring
to
evaluation
templates
evaluation
templates,
define
abstract
commands
and
search
methods
for
parameters
of
these
commands.
Okay, after the parameter search step finishes, the abstract commands with parameters are translated into the concrete evaluation scripts by referring to command templates. Each command template defines mappings from an abstract command to ordered tuples of executable scripts and their agents. Each tuple of a script and an agent has conditional branches based on the input parameters.
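The mapping from an abstract command to executable scripts, with a conditional branch on the client OS, might look like this sketch (the template contents and command strings are assumptions, not the authors' templates):

```python
# Sketch of a command template: one abstract command maps to ordered
# (agent, script) tuples, with branches on input parameters such as
# the client OS. The concrete command strings are illustrative.

COMMAND_TEMPLATES = {
    "http_check": {
        "ubuntu": [("client", "curl -s http://{server}/")],
        # Branches for other OSes would go here.
    },
}

def render(abstract_command, params):
    """Choose the branch for the given OS and substitute parameters."""
    branch = COMMAND_TEMPLATES[abstract_command][params["client_os"]]
    return [(agent, script.format(**params)) for agent, script in branch]

print(render("http_check", {"client_os": "ubuntu", "server": "192.0.2.10"}))
# [('client', 'curl -s http://192.0.2.10/')]
```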
For example, if the type of the client OS is Ubuntu, these red scripts are chosen and the acquired parameters are substituted into the syntax. Finally, the evaluation program is generated as a list of evaluation scripts for all evaluation units. Each script is sequentially executed by its agent, and some evaluation scripts for performance measurement compare the results against predefined constraints.
To show the effect of our method, we conducted experiments to verify evaluation programs generated by the proposed method. In our prototype implementation of the method, evaluation programs are generated as workflows of evaluation scripts which can be executed by Ansible. We deployed the system design on an OpenStack virtual environment.
First, the left-hand side of this figure shows the system requirements for a connection between applications. In this example, an HTTP connection from app1 to app2 and a minimal bandwidth of 500 Mbps are required, and the right-hand side is the system design derived from the requirements. In this system, both the client application app1 and the server application app2 run on different Ubuntu OS nodes, and NGINX is used as an example of app2.
Here are the evaluation results. As you can see in the table, the results for the bandwidth turned out to fail, due to the shortage of bandwidth performance. This throughput reduction is because of the background traffic generated by additional VMs and the performance reduction of the second Ubuntu node from processing file transfers.
We modeled system requirements and design as graph structures, and the parameter search algorithms run on the system design graph to acquire parameters of scripts. For future work, we are planning to expand our parameter search algorithm so that parameters can be acquired in much more complex topologies.
I was saying thank you, Kazuki, for the presentation. I think we have time for one question. Sorry, I have some issues with my technical setup here; apologies for that.
A
Okay,
if
there
are
no
questions
from
the
room
katsuki,
thank
you
for
for
presenting
your
work
from
cnsm
to
to
dnamergy.
We
will
surely
come
back
to
you.
I
mean
I
have
in
fact
more
than
one
question,
so
I
will
come
back
to
you
offline,
because
I
think
some
of
the
the
aspect
that
you
are
using
as
techniques
could
be
very
useful
in
the
context
of
internet-based
networking,
and
I
would
like
to
hear
more
from
you,
so
I
I
will
follow
up
by
email.
Thank
you.
Okay, this is Shayan. I'm a PhD candidate at Lancaster University, and in this research I would like to discuss how intents can be used as a communication mechanism between service consumers and service providers. Next, please.
So here's the outline of my talk today. First, I will discuss the main challenges with the existing intent-based solutions and the limitations of the related work, and then I will propose our intent-based framework and expressions. Later, I will discuss our preliminary results in the context of a cloud CDN use case and intent refinement.
Finally, I will conclude and discuss our future work. Next, please. Before I start, I think it's important to differentiate between intents and policies as we see them. Policies are prescriptive rules that determine what kind of actions to take under different circumstances, and they are usually articulated by system experts like network admins, whereas intents are declarative expressions that allow users to express a desired outcome at a higher level, so they can also be used by non-technical users as well.
Since the development of intent-based NBIs today remains in its infancy, there are several challenges. The first challenge is that most of the existing intent-based solutions are designed for network experts, and therefore they provide prescriptive intent expressions rather than declarative ones. In our research we wanted to focus on generic service consumers who do not necessarily have technical knowledge; therefore, they need generic declarative expressions rather than the existing prescriptive ones.
The second challenge is the translation process between the declarative intent expressions and a form that is understood by the underlying system. We think it's important to have an intermediate level of translation where we break down, or decompose, these declarative intents into a set of abstract policies that are technology-agnostic, and the reason to do so is, of course, to provide more flexibility and reusability.
So here are some of the limitations of the existing related work. Most of the existing intent solutions are limited and ad hoc, some of them are vendor-specific, and since most of these works mainly focus on the networking and NFV domains, the intent expressions that they provide are considered to be prescriptive. In our case, we need generic and declarative intent expressions that go beyond the network domain. Moreover, most of the current works do not really provide the means or the tools to create new intents and map them to lower-level policies. Next, please.
So, to address the aforementioned gap, we have proposed an intent-based NBI framework and expressions. Our framework consists of three layers. The first layer is the northbound interface, which connects the service consumer and service provider via their corresponding APIs, and it is responsible for mapping these intents to their equivalent policies.
So I'll discuss our proposed generic declarative intent expression. It is expressed in terms of service, resources, a conjunction (for readability), and a target. This can be decomposed by the intent developer or the service provider into a set of prescriptive policies connected using logical operators like AND and OR, and each policy can be expressed in terms of conditions, actions, constraints, and optionally a priority. Of course, we need to maintain the mapping between these intents and their corresponding policies, as you can see in the table below. Next, please. So, for demonstration purposes, I will discuss our preliminary results in the context of a cloud CDN, or content delivery network, use case.
So in this situation, the service consumer would be the content provider who wants to cache their content in a CDN, and the service provider would be the CDN operator who is managing the CDN.
So today, cloud CDNs do not really allow content providers to express their high-level intents. In our solution, we would like to have content providers expressing their intent, so they can say something like: hey, I want caching for content X to handle 20 gigabytes per minute. In this case, the target in this intent is to hit this specific workload. This has to be decomposed by the CDN operator into a set of prescriptive policies.
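The decomposition described above might be sketched like this; the policy fields (conditions, actions, constraints) follow the expression structure from the talk, while the concrete scaling rules and values are hypothetical:

```python
# Hypothetical decomposition of a declarative caching intent into
# prescriptive policies; the rule values are illustrative.

intent = {
    "service": "caching",
    "resources": ["content-x"],
    "target": {"workload_gb_per_min": 20},
}

def decompose(intent):
    """Map the declarative intent to prescriptive scaling policies,
    connected by an implicit AND."""
    target = intent["target"]["workload_gb_per_min"]
    return [
        {"condition": f"demand > {target} GB/min",
         "action": "add cache node",
         "constraint": "max cluster size"},
        {"condition": f"demand < {0.5 * target} GB/min",
         "action": "remove cache node",
         "constraint": "min 1 cache node"},
    ]

policies = decompose(intent)
print(policies[0]["action"])  # add cache node
```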
So in this figure we have shown how the workload has been achieved throughout the day by varying the cache cluster size, which means adding or removing caches according to the demand. The baseline is the traditional cloud CDN behavior that reactively scales the cluster, and we have compared this against our intent solution in two cases: the first case is an underestimated intent workload, and the other one is a well-estimated intent.
So in both intent cases we have achieved a better workload than the baseline but, of course, with a well-estimated intent we can get even better results. Next, please.
Yes! So in this figure we have compared the cache cluster resizing behavior in the three scenarios: the no-intent scenario, which is the baseline that does reactive scaling of the cluster, and then we have compared this against our refined intents, both greedy and conservative.
I
Of
course,
this
comes
at
additional
cost
of
running
up
time,
so
this
this
has
to
be
tuned
by
the
cdn
operator.
Next,
please.
So in this figure we have compared the workload in the three scenarios. With the refined intents we were able to achieve more workload: of course, the greedy approach was the best, then the conservative one, and the last, of course, was the baseline, which is the no-intent scenario. Next, please.
I
So,
to
conclude,
we
have
discussed
the
limitations
of
the
current
intent-based
solutions
and
therefore
we
have
proposed
an
intent-based
mbi
framework,
along
with
some
declarative,
intent,
expressions
and
their
corresponding
prescriptive
policies.
We
have
also
demonstrated
a
caching
intent
with
a
workload
target
and
its
corresponding
policies.
We've
also
discussed
some
possible
intent
refinements
in
a
cloud
cdn
use
case.
Next,
please.
I
So,
as
part
of
our
future
work,
we
plan
to
extend
the
current
static
mapping
between
intents
and
policies
to
have
a
dynamic
one
based
on
several
criteria
and
map
them
to
existing
microservices.
I
A
A
Yeah, I'm back, sorry for that. I would just like to ask if you can give us some figures about the different approach that you have.
A
C
I
C
Directly, because I think you have some issue there. Yeah, so thank you again, and we will now move on to the next speaker. Is there any other question? There is one question, maybe, I'm not sure.
J
Hi, it's Phil here. It was a nice talk, thank you. I was just wondering, in terms of your example with the cloud CDN:
J
You talked about the intent there. Was that something you discussed with those sorts of providers, about what kind of intent they would like? Or is this just a working example, where you decided what you think are sensible things to pick?
I
Yeah, so for now we haven't really talked to anyone, but we have looked at the existing cloud CDNs, the popular ones like Google Cloud, AWS, etc., and we have noticed that they don't really provide content providers with this kind of flexibility. So this was one of the examples that we thought about.
I
Actually, it was inspired by one of the works that we have read, which was talking about futuristic CDNs, where they would like to allow content providers to express higher-level requirements. But that was not really formulated or encapsulated in the form of intents, and therefore we decided to build on that idea and adopt the intent solution in this context.
C
Okay, so thank you again, and we will now continue with the next presentation. I think it's Joe. I don't know if Lauren can share the screen; I think he has some issue, so let me open the presentation, maybe.
A
Hello, can you hear me? Yes, Laura, we can hear you. Okay, so Joe will share the screen himself, because he has a video demo embedded.
H
H
H
Okay, so hello, everybody. My name is Joseph McNamara. I'm a PhD student studying under an Irish Research Council funded Enterprise Partnership Scheme with Athlone Institute of Technology and Ericsson. I will be presenting my recent work in the area of intent in an adaptive policy environment, along with a demonstration of intent in action.
H
H
Next, I will describe the architecture of the system, along with the usage scenario to be demoed later in the presentation, and then we will describe the three primary steps in how the system handles intent. This is performed in the order of intent generation, intent comparison and intent resolution.
H
H
H
If no conflicts are detected, the intent is added to a collection. If a conflict is detected, the intent is rejected and the issuer is notified. The addition of a new intent to the collection triggers a reconfiguration of the network, where the intent collection defines the scope for acceptable network parameters. In the next slide, we will provide more details of this process.
H
H
In this example, "who" describes the user; "what" is a generic label for the intent; "when" describes a time frame for the intent to be active; "where" describes a condition for the intent, setting bandwidth to less than one megabyte per second; and "how" is included for the example but is set to null, as it does not play a role in the usage scenario. The intent message is received by the policy engine, where it is parsed into an internal path collection.
H
In this slide, we detail the intent comparison step. The intent message has been processed by the policy engine into an internal path collection. Each path through the tree can be easily compared to branches of different intents. Using this approach, relationships are easier to identify on a structural level.
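An illustrative sketch of this parsing-and-comparison idea (not the actual policy engine): flatten each intent message into root-to-leaf paths, then intersect path prefixes to find attributes that two intents both touch. The message fields and values below are assumptions modeled on the example in the talk.

```python
# Turn a nested intent message into a collection of root-to-leaf paths so
# that two intents can be compared structurally, branch by branch.

def to_paths(tree, prefix=()):
    """Flatten a nested intent dict into root-to-leaf paths (tuples)."""
    if not isinstance(tree, dict):
        return [prefix + (tree,)]
    paths = []
    for key, sub in tree.items():
        paths.extend(to_paths(sub, prefix + (key,)))
    return paths

intent_a = {"who": "user1", "where": {"bandwidth": "<1MBps"}}
intent_b = {"who": "user2", "where": {"bandwidth": "<375KBps"}}

paths_a = to_paths(intent_a)
paths_b = to_paths(intent_b)

# Two intents touch the same attribute if they share a path prefix.
shared = {p[:-1] for p in paths_a} & {p[:-1] for p in paths_b}
```

Here `shared` would contain the `("where", "bandwidth")` prefix, flagging the two bandwidth conditions for a value-level conflict check.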
H
H
To compare attributes, we must first know what they are and how they behave. This resulted in the incorporation of a dictionary within the policy system. The dictionary enables the identification of predefined elements, such as bandwidth, and provides a predefined blueprint for the intent element to be mapped to. Mapping intent elements provides functionality for adjusting our actions, to generate responses that are aligned with all currently active intents in the system, in this case as more intents are introduced to the system.
H
The graph shows outputs similar to the demonstration you're about to see. In this scenario, we implement three straightforward intent messages. The first intent message requests a bandwidth of one megabyte per second. This is more than necessary for the video we are streaming over the Mininet network to generate traffic.
H
The second intent message requests a bandwidth of about 375 kilobytes per second. As this intent does not conflict with the already existing intents in the system, it has been validated and has triggered a reconfiguration of the network. The third intent message requests a bandwidth of about 190 kilobytes per second, which is compared to the two already existing intents and triggers a third reconfiguration of the network.
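The three-message scenario can be mimicked in a few lines. This is an assumed, minimal model of the accept/reject cycle, not the demo's code: each accepted bandwidth cap joins the collection, and the network is reconfigured to the tightest cap; the conflict rule (a floor below which a cap is rejected) is purely illustrative.

```python
# Minimal model of the validate-then-reconfigure cycle for bandwidth
# intents; the floor-based conflict rule is an illustrative assumption.

active_intents = []              # accepted bandwidth caps, in KB/s

def submit(cap_kbps, floor_kbps=100):
    """Validate a new cap against the collection; None means rejected."""
    if cap_kbps < floor_kbps:
        return None              # conflict: the issuer would be notified
    active_intents.append(cap_kbps)
    return min(active_intents)   # network reconfigured to the tightest cap

# The three intent messages from the demonstration (1 MB/s, 375 KB/s, 190 KB/s)
limits = [submit(1000), submit(375), submit(190)]
```

Each call returns the newly configured limit, so `limits` traces the three successive reconfigurations.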
H
H
H
H
H
I can skip ahead now, just to save time. So, after streaming for a few seconds, we are going to introduce a new intent into the system, reducing the bandwidth of the configured network. The new intent triggers a reconfiguration and starts another video.
H
H
A
A
Okay, so maybe, let's see, I will revoke the screen sharing for a moment, just to trigger... I'm not sure there is any other question.
A
E
A
E
A
A
B
It's not full screen, but I don't know. Okay, great. So yeah, hello, thanks again for inviting me. I'm Stefan Schneider from Paderborn University, and I'd like to present our work on self-driving network and service coordination using deep reinforcement learning. As with the previous talks, this is also based on our paper at CNSM last year, and I'll try to give a reduced version of our conference talk and, in the end, focus on challenges of applying AI for network management. Yeah, so let's get started. Next slide, please.
B
B
In addition to these services, we have our network, depicted here by the clouds, which consists of distributed but interconnected nodes, where each node may have some kind of limited compute resource, and then we have our users that want to use these services. In order to provide these services to our users, we need to scale the service components, place instances of these components at the different nodes, and then schedule the rapidly incoming flows from these users to these different instances.
B
B
If, for example, the load at the ingress node is becoming too high, then I can scale out this component c1, place a second instance of c1 at the neighboring node, and then schedule some part of the traffic to the second instance, to push the load away from the ingress node to the neighboring node. And of course, in this scenario nothing is static.
B
So this whole scenario isn't new; there's a lot of existing work, actually. But when we looked at existing work, we found that there are often three major limitations in applying it in practice. The first is that existing work often focuses on mid- to long-term planning per deployment request: assuming some expected load, they run some algorithms, place instances, hardwire them to the ingress nodes, and you hope that it works.
B
But the problem is that operational reality often diverges from such initial plans, if the load is different, for example. So we really focused on the rapidly incoming user flows here, on scheduling these flows in real time, and on adjusting our scaling and placement dynamically. So the scenario is a bit more fast-paced and dynamic here.
B
But if the scenarios change and the underlying assumptions no longer hold, then they easily break, or at least don't work as well anymore, and then again we need these experts to sit down, understand the problem and fix the approaches. What we would rather want to have is an approach that self-adapts to new scenarios and to new objectives, all without human intervention and without expert knowledge. And then, lastly, existing approaches often assume global, up-to-date, or sometimes even a priori knowledge of what's going on in the entire network, and for large networks with monitoring delay
B
this is not very realistic. So we really focus here on partial and delayed observations that could realistically be available through monitoring, and we do all of that with a model-free deep reinforcement learning approach.
B
Next, please. Yes, so here's an overview of that approach. On the left side, you see the network, and in that network every compute node has a scheduling table. These scheduling tables are basically rules that are applied to incoming flows locally at runtime. And then, in the top right corner,
B
you see our RL agent, or reinforcement learning agent, which periodically monitors the network, gets information about what's going on, and then updates the rules inside these scheduling tables, for example to change load balancing, scaling or placement. This repeats iteratively and, during training, the agent also receives a reward signal that indicates how happy we are with the current situation, so that it can learn from its actions.
B
Next. So let's have a bit closer look at what these scheduling tables look like. As I mentioned, these tables express what to do with incoming flows, based on which service a flow requests and which component within that service. So let's assume a flow arrives at ingress node v1. Then we see the scheduling table here, and let's assume the flow requests
B
the first service, s1, and component c1, so the first row in that table. Then, with 10 percent probability, that flow is processed locally at node v1; with 40 percent probability it is sent to the neighbor and processed there; and with 50 percent it is sent to node v3 and processed there. That's how the scheduling works here. But we also derive the scaling and placement automatically from the scheduling tables, in one joint step, by placing instances at all compute nodes where flows could possibly arrive.
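A small sketch of such a scheduling table. The node, service and probability values follow the example above; the code itself is an illustrative assumption, not the paper's implementation.

```python
# Per-node scheduling table: for (service, component), a list of
# (destination node, probability) rules applied to incoming flows.
import random

# Row for (service s1, component c1) at ingress node v1:
# process locally with p=0.1, at v2 with p=0.4, at v3 with p=0.5.
schedule = {("s1", "c1"): [("v1", 0.1), ("v2", 0.4), ("v3", 0.5)]}

def route(service, component, rng=random):
    """Pick the node that processes an incoming flow, by weighted choice."""
    nodes, weights = zip(*schedule[(service, component)])
    return rng.choices(nodes, weights=weights, k=1)[0]

# Scaling and placement derived in the same joint step: instances go
# wherever flows can arrive with non-zero probability.
placement = {n for rules in schedule.values() for n, p in rules if p > 0}
```

This mirrors the joint derivation in the talk: the table alone determines both where flows go and where instances must be placed.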
B
B
B
Yeah, so again an overview of our RL framework: on the left side we see the network that is monitored in step one, and the monitoring information is processed by an adapter that retrieves the relevant observations, calculates the reward and passes that to our deep reinforcement learning agent in step two. Our DRL agent is built on DDPG, deep deterministic policy gradient, which supports such large continuous action spaces and which is an actor-critic approach.
B
So in step three we retrieve the next action from the actor and pass that to the adapter, which then applies the new, updated schedule and placement to the network, and then this whole cycle repeats. It repeats a lot of times during training, until convergence. Because there are so many iterations during training, we do that upfront, offline, and here we focus on exploration:
B
you have to, you know, find good actions. Then, once the training converges, we switch to online inference. Then we don't need to update our neural network anymore, and it's really fast, so we can do that online, and we also focus on exploiting the best action rather than on exploration. Next, please.
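The three-step cycle can be sketched as follows. Every interface here is an assumption for illustration; in particular, the trivial `Agent` is a stand-in for the actual DDPG actor-critic, and the adapter logic is invented.

```python
# Sketch of the monitor -> observe -> act -> apply cycle (steps 1-3).
import random

class Network:
    def __init__(self): self.load, self.schedule = 0.5, None
    def monitor(self): return {"load": self.load}
    def apply(self, schedule): self.schedule = schedule

class Adapter:
    def process(self, raw):                 # observation + reward
        return [raw["load"]], 1.0 - raw["load"]
    def to_schedule(self, action):          # action -> scheduling table
        return {("s1", "c1"): [("v1", action[0]), ("v2", 1 - action[0])]}

class Agent:                                # stand-in for the DDPG actor
    def act(self, obs): return [random.random()]
    def learn(self, obs, action, reward): pass

def control_loop(net, adapter, agent, steps, train=True):
    for _ in range(steps):
        obs, reward = adapter.process(net.monitor())    # step 1: monitoring
        action = agent.act(obs)                         # step 2: actor output
        if train:
            agent.learn(obs, action, reward)            # offline DDPG update
        net.apply(adapter.to_schedule(action))          # step 3: new tables

net = Network()
control_loop(net, Adapter(), Agent(), steps=5)
```

With `train=False`, the same loop is the fast online-inference mode described above: no network update, pure exploitation.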
B
Yeah, so we evaluated this approach on four real-world network topologies, on varying stochastic traffic patterns and also on real-world traffic traces, and compared it against three algorithms. All the results are in the paper, but here, for time reasons, I just want to show one quite representative result on the next slide, please.
B
It supports optimizing multiple different objectives, so also optimizing delay, or something in between, navigating this trade-off between multiple objectives, and it scales to large networks. Next.
B
When applying AI for network management: so I listed some of the challenges that I think we solved here, and also some open challenges at the bottom. I'm not sure if I should go through this. I still have two minutes, right? Yeah, I'll briefly go through this, sorry. So when we started this research, we weren't sure how to approach it with AI.
B
So we debated whether to look at typical supervised AI approaches, but we saw that there's a lack of data, and typical regression or classification didn't seem right for our network management. So we decided to go with RL, and we still had to select a suitable RL approach; we went for DDPG because of its support for large continuous actions.
B
B
But overall, I think we solved these challenges and it works, and the approach does self-adapt to different scenarios. It does scale, it generalizes, so I do think it's an important step towards self-driving networks in practice. But I am also aware of the open challenges that are still ahead of us, at least some of which are listed here. So I do think we need more standard benchmarks to compare and measure progress in the area. There are benchmarks in other domains: for example, for video games there's the Atari benchmark, and for robotics there's MuJoCo.
B
I think we need something like that. We also need to think about how to bridge the gap from simulation to reality, which is non-trivial, and there we need to think about safe and explainable AI, and about robustness. And even if we get all of that to work in one real network, we still need to make sure that our approach generalizes and can learn online very efficiently if the situations in these networks change. And then, ultimately: in this approach, we focus on model-free RL.
B
So we don't need expert knowledge, and that's nice, but I think it would also be nice to take and leverage our existing expert knowledge and combine it with AI to get the best results here, and I think model-free plus model-based could work. So, still a lot of open challenges.
A
A
C
If you have time for a quick question, maybe; if it's too long, maybe you can take it offline. But a general question. Thank you, Stefan. You pointed out one difficulty: you have several iterations, in particular in creating the reward function.
C
B
That's a very good question. So I think there are a few things, or rules, to look out for: to make sure to include the relevant metrics, and to make sure that the reward is given regularly, so as to nudge the RL agent consistently in the right direction. If the reward is very, very sparse, so only a plus one every one million time steps, then it's hard to learn. The agent needs to get a positive or negative reward, or some meaningful reward, starting from random actions.
B
If it never gets any relevant reward based on these random actions, then it will never learn anything. So there are a few things to look out for, which I have learned to take into account, but I think there is no cookbook for a great reward function yet. Does that answer your question? Yeah.
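As an illustration of these rules of thumb (this is an assumed example, not a reward from the paper), a dense reward can combine the relevant metrics so that even random actions receive a meaningful, regular signal:

```python
# Dense reward in [-1, 1] that trades off flow success against delay;
# the metrics, weighting and scaling are illustrative assumptions.

def reward(successful_flows, total_flows, avg_delay, max_delay, w=0.5):
    """Return a per-step reward combining throughput and delay terms."""
    if total_flows == 0:
        return 0.0
    success = successful_flows / total_flows          # higher is better
    delay_penalty = min(avg_delay / max_delay, 1.0)   # lower is better
    # Map each term to [-1, 1] and mix with weight w.
    return w * (2 * success - 1) + (1 - w) * (1 - 2 * delay_penalty)

r = reward(successful_flows=80, total_flows=100, avg_delay=20, max_delay=100)
```

Because both terms move every step, the agent is nudged consistently rather than waiting for a rare sparse bonus.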
A
A
So we will switch to our last presentation for this meeting, which will be given by a student of Jerome. Jerome, I think you have the slides; would you like to share?
K
K
Okay, great, you can see my screen, right? It's all good. Okay, hello, everybody. I am a PhD student at Inria and Orange; I work with both labs, and today I'm going to talk about problems and strategies in implementing AI models in networks.
K
So what are some of the benefits of in-network computation? Let's start with what exactly in-network computation is: it's basically taking an application or service and offloading parts of it, or the whole thing completely, onto the network, so onto the data plane, essentially. Doing this has a lot of benefits.
K
One is latency reduction. Generally speaking, services are offered by servers that are connected to the network, and having those services in-network means that you have a lower response time to requests, because they are serviced in the network rather than being routed between different nodes in the network.
K
K
K
Network devices, specifically switches, are really efficient, and they're designed to do one specific thing, that is, to classify and route traffic. So leveraging that capability of a switch to perform computation will also save energy, because the end servers can remain idle or go into a low-power state, so the overall energy savings are considerable.
K
So what are some of the applications that are being pushed in-network, onto the data plane?
K
There are a lot of security applications, like DDoS detection algorithms and anomaly detection and classification algorithms, that are being pushed. This could be something like a logistic regression function that runs on a switch and checks flows to see whether they are anomalous or not. You have low-latency applications.
K
This could be DNS caches, which rest completely on the switch, things like that. You have scheduling and congestion control algorithms like RCP, which are being completely pushed in; there have been various papers related to this. Then you have in-network aggregation.
K
So what kinds of hardware are generally used for in-network computation? The traditional method is using middleboxes, and you have three types. You have the most basic, dedicated custom-hardware middlebox, which is purpose-built to do one specific task, whether it's a firewall or some sort of acceleration, etc. This sort of hardware you cannot really change; you cannot really do anything else with it.
K
Then you have the more recent and more flexible ones that are coming out: the x86 middleboxes, which are basically a server with a really fast smart NIC. You have all sorts of stuff like Snort, which is a network IDS, and the more recent ones like NFV are also becoming more prominent; AWS has a good platform for that, called NFV MANO. And then you have the hardware switch-based in-network implementation:
K
a programmable architecture, and languages like P4, which allow you to describe switch behavior on switches and smart NICs, etc.
K
Tofino is one example of such a chip, an RMT chip, which can be controlled programmatically.
K
So what are some of the drawbacks and limitations? Middleboxes generally tend to increase latency, because you have to go through the network stack of these devices. They are generally purpose-built and not flexible, and for each new service you want to add, generally speaking, you have to buy the hardware, which could add additional costs, and having many middleboxes for various services could easily clog up the network really fast. Programmable switches,
K
on the other hand: switches in general already exist in the network, so they're already there, and that's one of their advantages. But they're really restrictive devices, so to speak. They don't have much memory on board, and, especially for AI, many of the functions used are real-valued functions, which are not supported natively by any of the programmable RMT profiles or models out there.
K
Most of the logic you can deploy on a switch is relatively simple, so they don't support complex programs, so to speak. They have very basic instruction sets: bitwise operations, basic arithmetic, bit-field editing, things like that. They do not have complex instruction sets like middleboxes do, and generally their pipeline is linear in nature.
K
What that means is that it's forward-facing; it generally goes in one direction. You cannot go back a step; it's linear in that way. So many of the algorithms we use for real-valued computation, etc.,
K
have loops and structures like that in them, and they're harder to implement because of this. And of course, complex feature sets are not available on switches as of right now. So, to counter some of those drawbacks and limitations of switches,
K
these are some of the current solutions out there that have been published over the last few years. The most common way to do computation is using lookup tables. Lookup tables are very popular because, essentially, switches do just that: they look up values and then classify packets based on the looked-up value. So lookup tables are very popular, and actually most of the research into computation in-network, on a switch, is using lookup tables.
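A minimal sketch of the lookup-table technique, assuming an 8-bit quantization (illustrative only, not a specific published system): the control plane precomputes the real-valued function, and the data plane reduces evaluation to a match-style lookup.

```python
# Approximate a real-valued function on a switch by precomputing it over
# quantized inputs, so the data plane only performs a table lookup.
import math

BITS = 8                                    # assumed quantization width
SCALE = (2 ** BITS) - 1

# Control plane: precompute log(1 + x) for x in [0, 1] at 8-bit precision.
table = [math.log1p(i / SCALE) for i in range(SCALE + 1)]

def switch_eval(x):
    """Data-plane side: quantize the input and look the result up."""
    key = min(int(x * SCALE + 0.5), SCALE)  # match-action style key
    return table[key]

approx, exact = switch_eval(0.5), math.log1p(0.5)
```

The quantization width trades table size (switch memory) against approximation error, which is exactly the resource tension described above.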
K
This is one example I've given. Then, floating-point numbers: generally, most models, in Python for instance, use floating-point numbers, single-precision floating-point numbers, doubles, things like that. This is considerably hard on a switch, because of the way these devices work.
K
Many of the models require complex features, and these features are not easily accessible on a switch. Several papers have addressed this by using other, external servers to embed features in a packet header and then extract them on the switch for computation.
K
K
And there have also been models where, for example, you split the tasks: what the switch can do better is done on the switch, and certain parts of the computation are done on the CPU.
K
This is one example: a system called BaNaNa Split, which splits a certain part of a binary neural network onto a smart NIC, while the other part is done on the CPU of a general-purpose computer. So, some of the work I have been involved with over the last year has been to create pipelines for programmable switches to implement real-valued functions.
K
So the idea is to take a real-valued function and then implement a pipeline which will compute that real-valued function.
K
The idea is that we want to provide a platform so that we could deploy all sorts of models, machine learning models and other kinds of models, in-network, and we wanted a sort of framework that could easily generate that.
K
So how do we do this? I'm going to briefly describe how it works. We start by taking a function and by defining elementary operations, like addition, logarithm and division, and we implement them using the mechanisms and the instruction sets that are provided by the switch. That's the first step. Then, given a function, take this f(x, y):
K
we break it down into its elementary operations and we create a graph, a directed acyclic graph, which shows all the dependencies and all the operators and variables that exist. Then we constrain this graph. For example, if x represents a port number, then we know that x can be between 0 and 65535.
K
If x represents bandwidth, then we know the bandwidth of the port, so that could be a constraint. Another constraint here is the sine node over here: we know it will always output something between -1 and 1, so we know the input to the log, for example, is always going to be within that range. So we assign constraints to each node, and then we perform aggregation: since we know that the input to the log is always going to be between -1 and 1, in this case
K
we don't need to store any precomputation for any of the other possible values, just for that range. So we aggregate, we minimize our graph here, and then we substitute: back here, we had defined a bunch of primitives for each elementary operation; we substitute them here and we further simplify it.
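The pipeline just described (decompose into a DAG, constrain, aggregate) can be sketched as follows. The example function, node names and interval rules are illustrative assumptions, not the actual framework.

```python
# Break a function into an operator DAG and propagate value ranges, so
# each node's lookup table only needs to cover reachable inputs.
import math

# f(x, y) = log(1 + sin(x)) + y, as a small operator DAG
dag = {
    "x":   {"op": "input", "range": (0, 65535)},   # e.g. a port number
    "y":   {"op": "input", "range": (0, 100)},
    "sin": {"op": "sin", "args": ["x"]},
    "log": {"op": "log1p", "args": ["sin"]},
    "out": {"op": "add", "args": ["log", "y"]},
}

def propagate(dag):
    """Assign an output range to every node (constraint aggregation)."""
    ranges = {}
    for name, node in dag.items():         # dicts keep insertion order
        if node["op"] == "input":
            ranges[name] = node["range"]
        elif node["op"] == "sin":
            ranges[name] = (-1.0, 1.0)     # sine is always in [-1, 1]
        elif node["op"] == "log1p":
            lo, hi = ranges[node["args"][0]]
            # clamp the lower bound to keep log1p defined
            ranges[name] = (math.log1p(max(lo, -0.999)), math.log1p(hi))
        elif node["op"] == "add":
            (a, b), (c, d) = (ranges[n] for n in node["args"])
            ranges[name] = (a + c, b + d)
    return ranges

ranges = propagate(dag)
```

With the ranges known, the precomputed tables from the substitution step only need entries for reachable values, which is what makes the graph fit the switch.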
K
We simplify the graph, and then we use linear programming to place the various nodes of this computation graph on the switch pipeline, and we also solve something called the rectangle packing problem, which we've mapped to this, to reduce error.
K
So basically, in conclusion: in-network computing offers several advantages, like increased throughput, latency reduction and power saving. Middleboxes and programmable switches are the most common ways in-network computation is achieved. Programmable switches, despite being fast, are heavily resource-limited,
K
but they are the most common way of achieving this. Several workarounds have been proposed, as I mentioned before. Given the state of things right now, we are far from having any sort of complex functions; there are several limitations. We can have small functions, but not anything complex, or a full network. For example, we have managed to implement logistic regression functions of up to four variables and k-means classifications in our lab, and we are planning on expanding this from a single switch to a whole network.
K
C
Thank you very much. Questions: unfortunately, we're running a bit out of time, and we had no time to take questions online. So, as for the first talk, if you have any question, you can just send it to the mailing list. I'm sorry, because we have a strict deadline and the schedule is tight. Sorry.
C
Yes, so maybe, again, I just want to thank all the presenters and all the participants, and I'm sorry, presenters, for being a bit abrupt like that and cutting the question time and so on. I will try to have a meeting that is a bit less dense next time. Yes, so thank you again, and, as usual, we will put the minutes online.
C
The recording will be available on the YouTube channel, and I hope to talk to you soon; I'll see you soon in the near future. I don't know if you want to say something else.
A
No, same as usual: thank you for attending the meeting, we will keep in touch, and we will send you information about our next meeting, probably in April.