From YouTube: IETF106-NMRG-20191122-1220
Description
NMRG meeting session at IETF106
2019/11/22 1220
https://datatracker.ietf.org/meeting/106/proceedings/
C
We are seeing other new approaches, but could that work be done with the existing ones? I know from my own experience that we are running into many problems if you try to do basic autonomics with the existing network management architecture. It would be interesting to hear from the academic community whether some of them are moving into network data management; I would love to be put in contact with them, or to read some papers or example proposals, so that we understand whether they fit with the same architecture that has been around for ages, if I were just to read them.
D
This has perhaps been the most energetic of the research groups we have had over the last few years, so I would encourage you to try to focus on actually concluding that work, publishing some documents, and getting the work out in terms of AI for network management. That is certainly a plausible topic for the future. I noticed, looking at the agenda for today, that we have one research presentation and then a bunch of reports from other organizations, and, you know, challenges to stimulate research, I think.
D
If you can stimulate research, and if we're starting to see AI for network management research being brought here, then that would be a good sign for the charter, for the rechartering. If we're simply seeing continued reports from other organizations, that's less of a good sign. So if you can stimulate research on that topic, and it shows that there's a community and interest, then that's something we should consider for the charter. It's a very small side issue.
E
Hey, this is Alex. Just a comment, to respond to Colin's remarks. I actually see a lot of activity and research interest for AI in network management in other communities, notably the IEEE Communications Society, and I think, with the charter here, there is a lot of work going on. So one question is: how can we perhaps attract some of those communities, so that they come here and trust this as a relevant venue for that work?
D
Colin Perkins again. As I say: if there is evidence that there is a community being formed here to do AI work, then I would support you adding that to the charter. I don't think it's appropriate to add work to the charter with the goal of stimulating people to come here; I want to see that there is interest before I do that.
G
Manish here, no chair hat, just me. Regarding raising our research interest: maybe we also want to look not just at what AI can bring, not just at how we can put AI into network management, but also at how AI-based network management can work in the network. A lot of network management requires a lot of data, and you need to move that data somewhere.
G
How can the network help network management, and the AI, in terms of data reduction and so on, rather than just adding some kind of rule base into network management, where we lose a lot of the possibilities? So, knowing that we're gathering all that data everywhere, and that it is needed for the AI, and for the AI in those network management functions, to work well: how can the network help with that? Looking at it the other way around. Just a comment.
A
Yeah, thank you for your comment regarding these specificities, the relationship with network management. I think at the end we'll have this kind of wrap-up from the side meeting. There are also some related I-Ds; they have just been presented. I think we'll have more time to discuss that aspect a bit more then. I think we have to move on to the next presenter.
H
Do you hear me? Yeah, very good, okay, thank you. So I would say I'm starting directly. First, again, thanks to the chairs for inviting me and for the opportunity to present parts of our work here. I was also following the introduction, the discussion about terminologies and so on, so I will also use the "self-driving" term; so don't be too upset.
H
I think there are a lot of terms around what AI is, and so on, in networking, and we should really get to the core and discuss that. What I'm presenting is joint work I did together with some people here from Munich, but also from Vienna and from Hungary. So let's start directly; can you go to the next slide?
H
So that's what we know about networks and steering networks today: the human is in the loop, and what the human is of course always doing is monitoring the network, thereby identifying networking problems, and then, when we have identified problems, what we do as humans is try to solve these problems; we optimize networks to get some solutions. And what also happens, before we put anything into effect, is that we design the performance evaluation as well, so we think about, okay...
H
So it should be your friend and your helper. It should augment your research; it should help you to develop the protocols; it should help you to develop the algorithms. And then, on the next slide, next slide please, we see that with AI or machine learning the idea is to have self-monitoring networks, so that the network itself can trigger the actions to identify problems, and, after that, that the network is actually equipped with opportunities, with actions, to self-optimize, to move towards the solution.
H
And what I'm focusing on in this talk is the third part, which is about self-benchmarking. The network should also have some way, before putting solutions into reality, into effect, to self-benchmark and see what the solution it thinks is best for the network should look like. So, next slide. What is the traditional way here? I mean, that's what we have in research: we have, for instance, the traces.
H
Sometimes we have models; if we don't have the traces, we abstract from the trace and have models of the network traffic. And then, if we want to try or test solutions, we are actually making some kind of best guesses. The human is in the loop, just trying to select some points and configurations, for instance for traffic measurements, based on their idea of what it should be; and that is not the data-driven approach we have in mind.
H
But of course, if you look into the papers, what we see there is that they have models, and the problem with a model, as always, is that it might not generalize well; it's also not covering what can happen in the future, and maybe not covering the extreme cases. And the third part is that sometimes, in research, people are proposing algorithms and maybe just taking some points that make the results look shiny; they design the performance evaluation in a way that is good for their solution.
H
This is the traditional chain: we have the benchmark instance generator, for instance for protocols, for routing algorithms; you're generating problems like routing from A to B, so we have the problem instances. These are then fed into our solutions, like network algorithms and functions, and then the algorithms or the functions behave in a way that we get the problem solutions. So this is the traditional way; next slide.
H
What we have already done over the last three years is to think about where and how to integrate machine learning and artificial intelligence there. In particular, what we were doing is: we want to learn first from the problem instances. This is like doing classical traffic analysis and trying to identify what the traffic looks like, or what the problem instance actually looks like. Where do we have our routing requests?
H
What does our demand matrix look like? And then, of course, when we are continuously solving problems, we can somehow get some information from there, so we can also look at the problem solutions. Combining both, the idea is that we can have existing systems, network algorithms, also here, to identify problems and so on.
H
This is work we have already done over the last three years and published at some workshops, at SIGCOMM and in some journals. Now, a way different from that is on the next slide, and this is where we think we can also plug in machine learning. The idea is that on the left side, where we are generating the benchmark instances, where we are generating the traffic...
H
...where we are generating the problem instances, we can also plug in machine learning and artificial intelligence, because we can again take the problem solutions into account. Thereby, what we would like to do is generate, out of several problem instances, problem instances that are challenging for our network algorithms or functions, allowing us as engineers and researchers to really identify weak spots, and thereby also to know where to look for, let's say, bugs, and to develop the solutions.
H
There was also one point mentioned about the data, because the data is not always available. This approach should also help to overcome this a little bit, in the sense that we should find ways to generate data that really helps us to benchmark our systems. So we see a lot of potential here; this is another part of the, let's say, AI or ML chain.
H
What we need to put here, and the overall idea then, is that in the end we have machine learning or AI versus machine learning or AI, but you can also say machine learning or AI versus the human as a developer. So when you're doing your testing, when you're doing your integration tasks, you're not testing just with something like unit tests for software, but you're also testing against a machine that is generating input for your programs automatically.
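The idea of testing against machine-generated inputs, rather than only hand-picked unit-test cases, can be sketched roughly as follows. This is a hypothetical illustration, not code from the talk; the function under test and the checked properties are made up.

```python
import random

def generated_inputs(n, seed=0):
    """A 'machine' that generates test inputs automatically,
    instead of a human hand-picking a few unit-test cases."""
    rng = random.Random(seed)
    for _ in range(n):
        yield rng.randint(-1000, 1000), rng.randint(-1000, 1000)

def route_cost(a, b):
    """Stand-in for the network function under test (hypothetical)."""
    return abs(a - b)

# Check properties for every generated input, not just fixed examples.
for a, b in generated_inputs(100):
    assert route_cost(a, b) == route_cost(b, a)  # symmetry
    assert route_cost(a, b) >= 0                 # non-negativity
print("all generated cases passed")
```

This is essentially the property-based-testing pattern: the generator plays the adversarial role the talk assigns to the machine.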
H
Okay, so next slide. This is an example that we used in our paper: we used Open vSwitch, because it's still very prominent software; it's in vendors' products, of course, not only as the open-source software. We had a setup with three hosts. In this simple setup we just had a host on the left side that was generating the network traffic, and for the generation we had the configurations, and we had the host on the right side...
H
...that we were sending the traffic to. And in the middle we were running the Open vSwitch. We were monitoring, for instance, the CPU utilization of this Open vSwitch, but we were also monitoring the latency; we took these two metrics into account in order to benchmark our system. The setup of the Open vSwitch in this case was just simple: there's just one rule for forwarding the packets. And you have this kind of setting everywhere; I mean, if you have firewalls...
H
The point is, of course, as we all know, that it's not so easy actually to generate network traffic, because the configuration space is huge and complex. That comes simply from the number of packets you are sending over time: you can just decide to send between 1,000 and 5,000 packets and see what that means. Also, how do you send the packets over time; what are the batch sizes?
H
Should you send one packet at a time, or should you send multiple, to emulate bursts? And what should the packet inter-arrival time be: one millisecond, 30 milliseconds, and so on? Then, when you go to the protocols, you can ask which protocol you should use, whether you should do VLAN tagging, and so on. And the point here is that if you're thinking about the full header, you could say you have options down to the level of individual bits.
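The combinatorial blow-up of the configuration space described above can be made concrete with a small sketch; the specific parameter values below are illustrative assumptions, not the paper's exact search space.

```python
# Hypothetical traffic-configuration grid built from the dimensions the
# speaker names: packet count, batch size, inter-arrival time, protocol,
# VLAN tagging. Each added dimension multiplies the space.
from itertools import product

packet_counts = range(1000, 5001, 100)   # 1,000 .. 5,000 packets, step 100
batch_sizes = [1, 2, 4, 8, 16]           # packets sent back-to-back
inter_arrival_ms = [1, 5, 10, 30]        # milliseconds between batches
protocols = ["udp", "tcp"]
vlan_tagging = [False, True]

space = list(product(packet_counts, batch_sizes, inter_arrival_ms,
                     protocols, vlan_tagging))
print(len(space))  # 41 * 5 * 4 * 2 * 2 = 3280 configurations
```

Even this coarse grid yields thousands of configurations; allowing arbitrary header bits, as the speaker notes, makes exhaustive search hopeless, which motivates the guided search that follows.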
H
So here the human is still somehow involved. We also believe that the human cannot be replaced completely: we still need to start somewhere, and without the human in this loop it might be quite hard for the machine to find anything. So the human still needs to set these lists, and these are also the parameters we then used in our evaluation, in this scenario here.
H
So what we apply is the measurement loop: we have the traffic generator, we have the device under test on which we have these measurement points, and it will automatically learn the performance model; this is the whole measurement loop. So, next slide. In the end, what you want to have is: the black box is the Open vSwitch, and we want to know what the performance model looks like after the measurements over time. This is what we're doing continuously. So, next slide.
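A minimal sketch of such a measurement loop, with a toy stand-in for both the measured black box and the guided search (a simple explore/exploit heuristic here); all names and numbers are illustrative assumptions, not the actual NetBOA implementation.

```python
import random

def cpu_utilization(n_packets, inter_arrival_ms):
    """Toy stand-in for the measured black box: in this made-up model,
    utilization grows with packet count and shrinks with inter-arrival time."""
    return max(0.0, min(100.0, n_packets / 50.0 - 4.0 * inter_arrival_ms))

def measurement_loop(iterations=50, seed=1):
    """Search for a traffic configuration that maximizes the measured
    CPU utilization, i.e. a 'challenging' configuration."""
    rng = random.Random(seed)
    best_cfg, best_util = None, -1.0
    for _ in range(iterations):
        if best_cfg is None or rng.random() < 0.3:
            # explore: sample a random configuration
            cfg = (rng.randint(1000, 5000), rng.choice([1, 5, 10, 30]))
        else:
            # exploit: perturb the best configuration found so far
            n, ia = best_cfg
            cfg = (max(1000, min(5000, n + rng.randint(-200, 200))), ia)
        util = cpu_utilization(*cfg)  # 'measure' the black box
        if util > best_util:
            best_cfg, best_util = cfg, util
    return best_cfg, best_util
```

The real approach replaces this heuristic with Bayesian optimization (a surrogate model plus an acquisition function) and replaces `cpu_utilization` with actual measurements of the vSwitch.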
H
So, next slide. And in the end it's actually notable: what we got from a grid search is that with specific network configurations you see really drastic differences in the CPU performance models. When you look at the two crosses on the next slide, what is interesting here is that, for instance, when you send 5,000 packets with an inter-arrival time of one millisecond, the CPU utilization is quite high, while when you're sending four thousand packets, the utilization is quite low. So, when you're sending four thousand...
H
This is what we see on the next slide. The point about the Open vSwitch is that whenever you are sending a packet for which there is no rule, it's inserting a rule for this packet's flow, and when you're sending packets from multiple IP subnets, for instance, it's inserting rules for all these subnets. And there's a timer for these rules, because the space for the rules is not endless and is always refreshed; this timer in Open vSwitch is by default set to 10 seconds. So, next slide.
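The interaction between packet inter-arrival time and the flow-rule idle timeout can be captured in a few lines; this is a toy model of the behavior described, not Open vSwitch code, and the 10-second default follows the talk.

```python
def slow_path_misses(arrival_times_s, idle_timeout_s=10.0):
    """Count packets that miss the flow table: the first packet of a flow
    always misses, and later packets miss whenever the gap since the
    previous packet exceeds the idle timeout (the rule has expired)."""
    misses, last = 0, None
    for t in arrival_times_s:
        if last is None or t - last > idle_timeout_s:
            misses += 1  # slow path: a new rule must be installed
        last = t
    return misses

# 5,000 packets at 1 ms spacing: the rule stays warm, one miss total.
print(slow_path_misses([i * 0.001 for i in range(5000)]))  # -> 1
# 10 packets spaced 12 s apart: the rule expires before every packet.
print(slow_path_misses([i * 12.0 for i in range(10)]))     # -> 10
```

The point of the model: per-packet cost depends not just on volume but on timing relative to the rule timeout, which is exactly the kind of non-obvious performance cliff the search is meant to expose.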
H
So what we can finally see here is NetBOA in action versus, for instance, random search, and the interesting part is that random search is still somehow the state of the art for network configuration, or rather traffic configuration, as used here. Our approach is always faster and always better, and, more interestingly, even after 50 iterations, when you are continuously measuring, for instance, the Open vSwitch, we already have a 24 percent higher CPU utilization, and we could thereby somehow discover this weak spot in the implementation of the Open vSwitch. So, next slide.
H
This is where I can already conclude. The idea is that we want to generate, that we want to have, adversarial inputs to find weak spots or security holes, and this should help us to make our systems more bulletproof. And the idea is that we need to use concepts like NetBOA to receive continuous feedback about solutions and implementations.
H
The use case here: we used the Bayesian-optimization-based, data-driven approach to generate these network traffic configurations, and what we saw is that we can find these challenging traffic configurations very efficiently, and we can use it either to maximize or to minimize, for example, CPU utilization or latency. The open questions we still see are, of course, the speed...
C
This is really interesting work, and I saw a draft from Comcast that was raising some of the questions and requirements that you're answering in your work. I was just trying to find the draft, to point you to it, and then, you know, you might work on this together in the IETF.
Thank you, I'm on the list. Thank you. Okay.
F
After the presentation I have three minutes to answer your questions, so I will go through it: first the value and vision, second the status and progress, and I will also introduce some other activities we have already done. Basically, this is not the first time I present this; I presented the progress and status about one year ago. I think now almost everybody agrees about the business value of network intelligence, or network AI.
F
Given that the network is becoming complex in terms of protocols and management, we really need AI to assist human beings in configuration, operation and management. We figured out that we can achieve a better network experience for both operators and enterprises, and also from the 5G and IoT perspective, and from the QoS or QoE perspective; this is all helpful. So I've listed... actually, this is the charter, more or less.
F
This ISG was founded in February 2017, and from the very beginning the focus was to study use cases about how we utilize AI for network operation and management. We proposed that we may add a closed-loop AI mechanism based on context-aware, metadata-driven policies. Besides the use-case study, we have also generated an abstracted AI architecture, and we have some PoCs...
F
...ongoing, and I will explain those later. To be mentioned: early this year the ISG extended its lifetime for another two years, and we will continue with the PoCs, the reference points and some other activities, which I will expand on later. So this is a one-page summary, in a nutshell. In the figure you can see that we already have more than 50 companies joining us, including 14 operators, and we have official teams and PoC review teams from operators, vendors and research institutes.
F
We have already studied 21 use cases in five big categories: for example, infrastructure management, network operations, and network-security related. So I'm sure that if you want to look at use cases, this is the right place; you can find enough use cases that might be useful for your network. Also, based on these use cases and the related requirements, we have already started six PoCs, and each PoC is supported by one big operator, plus vendors and research institutes.
F
Three of them are already completed, and I will explain one in later slides. Besides those, I've listed all the ongoing and finished work items, meaning the drafting effort: we have the use cases, the requirements, the terminology, the framework, and recently the categorization and the data mechanisms work.
F
We have already published six work items and one white paper, and I already mentioned the use cases. This page gives you a very quick glance at the requirements and the architecture, given that I have very limited time; if you want to know the details, we have almost 100 pages of slides to explain them in detail. For the requirements, we basically categorized them into three different categories.
F
These are service and network requirements, and functional and non-functional requirements. For the architecture, what I show you on this screen is a very abstract, high-level architecture; we also have a detailed architecture with reference points in the document. There are actually two modes, and three ways of applying the architecture to an existing network system, including other architectures to get aligned with. And another work that we just published this month is about the data.
F
Okay, so the categorization: we learned a little bit from the vehicle domain, I mean the autonomous-driving levels defined by SAE in America, and this is the way we can categorize the level of AI applied to the network, based on different parameters. And this is PoC number one, called intelligent network slicing lifecycle management.
F
We already sent a liaison last year, and we will probably send another liaison this year. We hope that we can jointly work together and create more useful, practical and valuable deliverables together. Thank you; any questions?
I
I'll present a bit the link between the ETSI ZSM activity and the work that could be done here on artificial intelligence. ETSI ZSM is not a standardization group doing artificial intelligence for networks per se; it's more around self-managing and zero-touch network management, network automation, and in that space we are using some artificial intelligence techniques.
I
For the sake of time I will go very rapidly over these slides, but if you want, you have some information here about the ISG that is currently running. This is the ZSM architecture: a management architecture, a service-based, service-oriented management architecture, and you have a set of different services and management functions, especially for registration, for controlling resources, for orchestration. I will focus on the specific set of services that we think are relevant to AI in networks, within the currently defined set of services in the architecture.
I
I have tried to highlight in red the set of services typically of interest for doing AI in networks. We have data collection, analytics and intelligence; this is the terminology we are using in ZSM, and you have it per domain or end-to-end. You also have, at the bottom right in the dashed line, services which are typically the infrastructure for monitoring and telemetry. Going into just a few details about those groups: domain analytics.
I
This is where you group together all the services that realize processing over some data. Domain analytics provides domain-specific insights and generates domain-specific predictions, based on data collected by the domain data collection and also data from other sources. This is in support of different types of analytics: you can generate insight, understanding what happened in the past and why it happened, and foresight, being able to predict future events in the network. This is also known under other terminology.
I
If you use the other terminology for analytics: descriptive, diagnostic and predictive analytics. The prescriptive part of analytics is handled more by the intelligence services. To summarize: in analytics we have a set of services that take data as input and try to generate some insights as output. Domain intelligence is where you have the services responsible for driving the closed-loop automation, supporting variable degrees of automated decision-making and human oversight, up to fully autonomous, autonomic networking. This aligns a bit...
I
...with what was mentioned earlier about the different levels of autonomy in networks. You have typical categories of what the intelligence services can output or be used for: decision support, decision-making, or action planning. This is to highlight a bit that you have different forms of AI: to learn on data, but also to reason on data, to make plans and decisions, and also to handle the aspects of knowledge management.
I
So the decision support services enable decision-making via technologies such as artificial intelligence, machine learning and knowledge management. This is where we see, with the domain analytics and domain intelligence, the use of specific AI techniques. Again, in ZSM we do not design such techniques, but we provide a kind of generic service where you can plug in the different intelligence mechanisms and make them usable in a more general self-management architecture. And, finally, data services.
I
This is a kind of common layer where the data services provide means of data persistence and enable data sharing between the different consumers of data, essentially domain analytics and domain intelligence, across the management domains. So it's a kind of reusable mechanism that any subscriber to the data services can use to collect the right set of data.
I
Another area where we see the application of AI in network management: there is some activity on closed loops. In ZSM we are working on designing interoperable closed-loop components: not the closed loop as a whole, but at least the different parts you see, like observation, orientation, decision and action, to provide these as standard components with interfaces, so that they can also use some parts of the intelligence services.
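The observation/orientation/decision/action decomposition just described can be sketched as a minimal pipeline. The function names, thresholds and the scale-out action below are hypothetical illustrations, not ZSM-defined interfaces.

```python
def observe(network):
    """Observation: collect raw telemetry from the managed domain."""
    return {"cpu": network["cpu"]}

def orient(observation, threshold=80):
    """Orientation: turn telemetry into an assessment (analytics)."""
    return "overloaded" if observation["cpu"] > threshold else "normal"

def decide(state):
    """Decision: choose an action (intelligence / decision support)."""
    return "scale_out" if state == "overloaded" else "no_op"

def act(network, action):
    """Action: apply the decision back to the managed domain."""
    if action == "scale_out":
        network["cpu"] -= 30  # toy effect of adding capacity
    return network

def closed_loop(network):
    """One pass of the loop; a real system iterates continuously."""
    return act(network, decide(orient(observe(network))))

print(closed_loop({"cpu": 90}))  # -> {'cpu': 60}
print(closed_loop({"cpu": 50}))  # -> {'cpu': 50}
```

Because each stage is a separate component with a defined interface, different implementations (for example, an ML-based `decide`) can be swapped in, which is the point of standardizing the parts rather than the whole loop.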
I
You can see that, for instance, the analytics and intelligence services will be reused as part of the decision process, to close the loop in such a representation. That's all for this short presentation. If you want to look at the architecture, it's a published, public document, and we are also working on the so-called closed-loop documents, where we will have more embedded use of AI.
I
The reason I'm presenting is that this is another group, not a standardization group: it's an Emerging Technology Initiative in the IEEE Communications Society, called Network Intelligence. I'm here representing this technology initiative on behalf of the chairs, the officers, because they could not make it, especially due to the time difference for remote presentation. So the ETI NI is a kind of placeholder, or community, to encourage research on network intelligence to happen, from the IEEE point of view. It's a new community.
I
It's a rather new community initiative focused on network intelligence. It was approved by ComSoc in December two years ago and it has been renewed again. It grew out of a technical committee in IEEE, after some reshuffling, that was targeting autonomic communications; we build on top of that, and we also have a new view, a new vision, of what network intelligence could be for networking. So, quickly, on the mission.
I
These kinds of technical committees in IEEE are here not to actually do the research but to foster the research. So it supports, seconds and endorses research towards embedding artificial intelligence in future software-defined and programmable forwarding planes. The aim is to support, endorse and enable faster deployment, dynamic provisioning, end-to-end orchestration, resiliency, availability.
I
We wanted to highlight key areas of interest for this group, related to, for instance, AI-enabled networking, which can also encompass the use of different learning techniques and how different learning techniques are useful, or need to be used, or could be used in a networking environment; resource allocation; the relationship to self-management and autonomic types of networks; learning and reasoning techniques. So you have a set of different topics, and it can continuously evolve.
I
Participation is open to everyone; it's not very costly. There is a website and a mailing list, so you can subscribe to the mailing list, and on the website you have all the relevant information. We try to keep it very up to date with the different events, workshops, special issues and datasets that are being brought by the community, so that it's always relevant in terms of timing. A quick view of some ongoing activity: since the ETI is already two years old, we have already done some workshops and special issues in the past, but to show the current activity...
I
There will be the third international workshop on network intelligence, which will be co-located with the INFOCOM conference next year. This is the second time we co-locate with INFOCOM; last year it was a very successful workshop, and it also came with a special issue in a journal. So you still have time to provide your contribution to this workshop. There will also be the AI-Net conference, which is co-located with the MPLS World Congress in Paris; this will be in the first quarter.
I
That is next year in Paris. You can still provide an abstract if you like, but the program is nearly closed now; the ETI is also supporting the organization of this conference. There will also be the third Applied Machine Learning Days, a very broad machine-learning conference in Lausanne, where we have a dedicated track on AI on networks, organized with some of the officers and other members of the ETI. This will be at the end of January in Lausanne.
I
It's a great event if you want to get to know and connect with people working in machine learning, but it is not network-specific; you have AI for multiple application domains. We are also active in some standardization activity, especially what we do here in the IETF and also in connection with ETSI ENI, but, as you have seen, there are also people active in different groups and in IEEE.
I
We have hundreds of thousands of dimensions to consider when we run the models, and numerous sets of KPIs, measurements and counters to track. So this is not, like some other fields, a very simple area: there is a multitude of different data to track. Network data is also very varied across the different segments: you can have very different types of data for the same thing in different segments, like the access, the core, the data center, and also the protocols and technologies related to these different segments.
I
You have logs, different types of logs, metrics, alarms, trouble tickets, KPIs, configuration files; the variety of network data is also very large compared to other fields. So organizing that data so that it can be exploited automatically by artificial intelligence techniques is a huge, huge preparation effort, just before you enter into the real problem of solving something with AI. I won't say that other fields are not this way, but I think in networking...
I
...it's a very, very important dimension, and this is a very big challenge today for very large adoption of AI and for having more powerful results. So the ETI on network intelligence tries to raise awareness of this in the community, so that everyone in the community can work towards this network data challenge, to exchange practices and knowledge, and to bring together datasets and strategies around network data, I think.
A
That's the last slide; thank you. The floor is open for comments and questions; I have one for myself, as an individual. So yes, I agree with the last slide, that the network data challenge is a multi-faceted problem. What is actually the role of the ETI; what would be the action of the ETI in this field, in this key activity? As you mentioned...
I
So far it's not organized; we don't have a clear path or roadmap for how to handle this key activity, but at least understanding that this is a key challenge was the first step. It was reported from a set of activities by different members active in the group, from what they have done with data in operational networks. They had really worked for a couple of years to collect different use cases and tried not only to work on the use cases, but to take a step back and ask: what have we learned?
I
What were the challenges when we worked on these use cases; not solving the use cases themselves, but getting the data, curating the data, everything that was preparation work before actually solving the case. This was shared knowledge, and we wanted to share this first understanding of what the challenges are. How we organize that in the ETI is still open, as an effort, and the next step.
C
So, when you say labeling network data is a huge effort: I believe that if there are basic data sets that people are already following and using, then taking those as a starting point and trying to create a classification of them could be helpful; it would be a good starting point. Because in my conversations with network operators, especially the network operations center people, they are saying "these are the important things to us, for these areas of the network," and at least that gives you some starting point.
C
You know, I was my whole life a vendor, trying to guess what is really needed and what is useful; it's like a bingo. So taking those data sets and using them as starting points for further evaluation might be a good idea. It would help focus the research area.

I
Thank you, Diana. I fully agree with what you say: categorization on the operations side is essential, and it is a multi-level effort, because that is one hand of the problem; but there is also understanding the diversity of what is being collected. If I am getting data and I don't know what the data is or how I can use it, I cannot bring it to the operations side. So this is a quite challenging activity. Go ahead.
J
No, no, no, but there's a problem: they don't talk to me. No, seriously, the point is that I think there is work on this, and we have started something for that, which I hope we will be able to report on in Vancouver, or more likely Madrid, because by then the people working on this will be closer. It is precisely about a data-source description framework.
J
So what you can do is use some metadata formats to tell what you are expecting to collect from a particular data source, or what that data source can provide. I don't know, I'm not aware, so my question is: do you know whether ETSI is considering initiating such an effort, or is this just a note?
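A data-source description of the kind mentioned here could be as small as a metadata document listing what a source provides and letting a consumer check whether its needs are covered; the schema and field names below are purely illustrative, not an ITU, IETF or ETSI format.

```python
# Illustrative metadata describing what a data source can provide.
# The schema is invented for this sketch; no standard format implied.
source_descriptor = {
    "source_id": "core-router-17",
    "data_type": "interface-counters",
    "fields": ["timestamp", "ifindex", "rx_bytes", "tx_bytes"],
    "sampling_period_s": 30,
    "encoding": "json",
}

def can_satisfy(descriptor, required_fields):
    """Check whether a source exposes all fields a consumer expects."""
    return set(required_fields) <= set(descriptor["fields"])

print(can_satisfy(source_descriptor, ["timestamp", "rx_bytes"]))  # True
print(can_satisfy(source_descriptor, ["cpu_load"]))               # False
```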
I
I don't know yet. I mean, essentially this network data is reflecting some activity that, for instance, a human is doing in the network. Sorry.
K
One thing that I've done for quite a while is work with operations people at large brick-and-mortar enterprises, and there are two movements that I see that are very much at cross purposes, I think. Trying to collect network data and have network intelligence, I totally support that. But with all the encrypted protocols that we have, and the push towards only being able to see packet-level data, and also metadata, only at the endpoints, network-based intelligence becomes quite challenging.
I
So, me personally, no, because this is not an area I'm looking into. One part of the answer I can provide from my experience is that at least part of what we are doing here does not always use user data or data traffic; it is also, mainly, management types of data. So even if the traffic is encrypted or uses authenticated communications, we have access, some of the time, to the logs, for instance, or the configuration files. These are not in the user plane; it is more data from the operator's use, so this is more AI for network management. I know there is also AI for networking in general, but to answer your question: maybe Jean has more insights on that. This is really not my field of expertise, so I cannot properly comment on that. Sure.
A
L
This is an update on the current status of ITU work on machine learning for 5G. The work items which are finished are listed here. There is "Machine learning in future networks including IMT-2020: use cases". This is approved; it has been derived from a large set of contributions to the ITU, which we have collated, fine-tuned and polished, and it is now approved as Supplement 55 to the Y.3170 series. It is currently in the editing process, it will be published in a few weeks, and it will be available for public access.
L
You can access it with a guest account; even non-members of ITU can access it. In fact, once it is published, you can find it through an Internet search. The next one is the architectural framework for the integration of machine learning in future networks. This is published already as Y.3172; it is available free of cost, so you can download it from our website. The framework for data handling to enable machine learning is currently in the ITU approval process.
L
Y.3174 is the number. You need an ITU account to access it right now, but once it is through the approval process it will be published and will again be freely available. We also did some work on evaluating the intelligence level of future networks; this is in the approval process as Y.3173. Next slide, please.
I just wanted to give you a quick update on the architecture framework. I won't go into a lot of detail, but I want to call out some important parts of the architecture.
L
It is published already, as I told you: Y.3172 is available already. I will go from the bottom to the top. There is the underlay network concept, on which we run the machine learning pipelines; you can run multiple of these on the underlay networks. For example, 3GPP could be an underlay network for us; IMT-2020 architectures, any of those could be an underlay network for us. The ML pipeline gives us an abstraction for machine learning: how you run different types of collection and data sources, and apply machine learning in the networks.
L
What are the inputs, outputs and models? This is abstracted by this machine learning pipeline. We heard from Andreas earlier about testing strategies: the sandbox gives us an environment to test and verify machine learning, and it hosts simulated ML underlay networks. You can read the details in Y.3172. On the left side we have the ML function orchestrator, which manages the machine learning functionality in the network. These are the main concepts that we have in the architecture framework. Next slide, please.
L
This slide gives the work items in progress. Ongoing is the optimization and deployment framework for machine learning models in future networks; this is ongoing as the ML5G 171 document. It is freely available; you need a guest account from the ITU, which you can create in five minutes. Then there is ML marketplace integration in future networks: this addresses the need of some of our operator members who want to host a machine learning marketplace integrated with existing marketplaces, whether internal or external.
L
These are the liaisons and collaborations. We collaborate with ISO/IEC (MPEG) and the Linux Foundation; we have some projects running on model optimization. You heard from ETSI ENI earlier; we have official collaboration going on with ETSI ENI and ZSM. We also collaborate with O-RAN on machine learning metadata, and as we speak, presentations similar to this one are going on in 3GPP SA5. We have our next meeting coming up in March; the weekly conference calls are open, and you don't need to be a member to join them.
L
Also, details are being worked out; we can discuss offline. Next slide, please. This is a unique initiative: the ITU is offering guidance to university students. There is a web document which describes it, and I coordinate the project; in case you are interested, please contact me. We have about 4 to 5 countries right now collaborating with us to run these projects. Next slide, please.
A
Thank you, Vishnu. I think we have to continue with the next presentation, because we are running out of time. So thank you again; please also provide all these links to the documents, so that people can access and read them, and can then continue the discussion with you by email and so on. So thank you again. Thank you.
A
So the next speaker is me. I will talk a bit about the network AI challenge, or AI network challenge; we don't have a name yet, but anyway. As you may have read on the mailing list, we have started talking about what could be a data challenge for networking, similar to some initiatives that we have seen before and that you will see in the next talk. We are maintaining a kind of pad where we can follow the different discussions we have.
A
Basically, we have roughly three work items, related to: what could be the use cases for this challenge to exploit; then the schedule and logistics, because if we run the challenge we need data, we need a platform, and so on; and of course all the remaining issues, or let's say questions, such as how to run the challenge and when it will be. I will go through this quickly.
A
The first item was about the different use cases, so we are discussing different use cases. We don't yet have the one or two use cases that the challenge will support, so this is still an open question. Here is one use case on routing that I will detail a bit afterwards, while others are more classical, like traffic prediction and so on. Of course it is open to any discussion.
A
So if you want to bring something that you think could be very interesting, we are very open. I will rapidly go through this, let's say, routing or forwarding and traffic engineering use case. Why did we want to introduce this one? It is a very well-known kind of use case, so there is no big fancy application that will come out of the challenge, but the idea is that it can be understood by almost anybody, so we can attract people from other communities, from AI communities.
A
What is very often done, and what could be a good new idea to try to exploit in this challenge, is the following: if you use some reinforcement learning (deep reinforcement learning, or reinforcement learning in general), you need some reward signal, right? But sometimes you cannot know what this reward is, because you don't have access to it once the packet arrives at the user end.
M
First I would like to discuss a little bit what people do in computer vision, because computer vision is a very successful application of AI, and then we can try to learn from what people in computer vision do, how their challenges work, and somehow apply that to machine learning for networks. So typically in computer vision, and this is very common for many challenges, what the organizers do is provide a data set with labeled data.
M
Then what the organizers do is evaluate the participants' neural networks with a data set for which the participants do not have the labels, so they cannot see the labels, they cannot overfit to them, and so on. This is the general approach, and there is a huge list of challenges being run in computer vision; some of them are quite prestigious, in particular the one organized around ImageNet.
M
So the challenge that we are organizing, we call it the network challenge; you have the URL, where you can find some additional information, and the plan that we have is the following. It is true that it is very hard to obtain data from real networks; as far as I know, no one has been successful, and then it is a chicken-and-egg problem: we cannot organize the challenge, and if we don't have data we cannot start working on real machine learning, because it is impossible to build a neural network without data.
M
So what we have done is: we are using a network simulator, which is open. It is event-driven and per-packet, meaning that each packet is simulated in the network. And what we do is quite simple labeling: we take one topology, one traffic matrix (a realistic traffic matrix) and a network configuration, we simulate it, and we generate the labels. The label says: OK, for this topology, this traffic matrix and this configuration, I am seeing this performance for the flows, this delay, this jitter and these losses. So it is very fundamental to computer networks.
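The label-generation loop just described can be sketched as follows; `simulate` is a stand-in for the real per-packet, event-driven simulator, and the topology, demands and configuration values are invented for illustration.

```python
import random

# Sketch of the label-generation loop described above: for each
# (topology, traffic matrix, configuration) sample, a per-packet
# simulator produces per-flow delay, jitter and loss labels.
# `simulate` is a placeholder for the real event-driven simulator.

def simulate(topology, traffic_matrix, config, seed=0):
    rng = random.Random(seed)
    # Placeholder: the real simulator plays out every packet.
    return {
        flow: {"delay_ms": rng.uniform(1, 50),
               "jitter_ms": rng.uniform(0, 5),
               "loss": rng.uniform(0, 0.01)}
        for flow in traffic_matrix
    }

topology = {"nodes": ["a", "b", "c"], "links": [("a", "b"), ("b", "c")]}
traffic_matrix = {("a", "c"): 10.0, ("c", "a"): 7.5}   # demands in Mbps
config = {"routing": "shortest-path", "queue": "fifo"}

samples = []
for seed in range(3):  # iterate over random configurations/runs
    labels = simulate(topology, traffic_matrix, config, seed=seed)
    samples.append({"input": (topology, traffic_matrix, config),
                    "labels": labels})

print(len(samples), "labeled samples,",
      len(samples[0]["labels"]), "flows each")
```

Scaling this loop over many topologies, configurations and traffic matrices is what produces the tens of gigabytes of labeled data mentioned below.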
M
We have around 60 gigabytes of data, because we need to run many, many topologies, with tens to hundreds of nodes, and then iterate over many random configurations and many different traffic matrices to generate all these labels. From a previous paper that we have, you can see the data set; it is around 60 gigabytes, and now we are generating a new one for this challenge, which is being generated right now. In the URL below you can see the one that we had for the other paper, and it is basically this.
M
We are also providing a Python API for participants to have faster access to the data. Then, what is the challenge? The challenge is: OK, given this data set, you have to build a neural network that, given an unseen network topology, configuration and traffic matrix, is able to estimate the per-flow delay, jitter and losses. So we will provide the training data.
So
what
we
have
as
baseline
is
rounded
Ramnath
is
a
graph
neural
network
which
is
able
to
generalize
to
unseen
topologies
traffic,
matrices
and
configurations,
and-
and
you
can
see
in
this
paper
the
whole
architecture,
it's
a
prince
or
so
participants
can
take
round
it
and
that's
very
important
and
that
they
have
a
baseline
to
start
working
with
it's.
They
don't
have
to
start
from
scratch,
and
you
will
see
why
this
is
important.
M
So if you take RouteNet, a vanilla RouteNet, and you don't do anything, you just use it with the new data set, what you obtain is a mean square error of 46 percent, which is quite poor; typically you would expect a mean square error close to zero. And why is this important? Because in the target audience you typically have two kinds of people. First, people who are purely data scientists: they don't know anything about computer networks, and maybe they don't care.
M
But what they want is to take this open-source TensorFlow code, RouteNet, and try their own tricks to increase the accuracy and precision of the model, purely with data science techniques. That is why it is important to give them a baseline. There are many people doing this kind of thing; maybe they do some hyper-parameter optimization and so on. This way we can bring people from other communities to networking.
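The kind of pure data-science improvement mentioned here, for instance hyper-parameter optimization, can be sketched as a simple grid search; `train_and_score` below is a placeholder for actually training and validating RouteNet-like code, and the parameter names and values are illustrative only.

```python
import itertools

# Sketch of "pure data science" tuning: grid-search hyper-parameters
# against a validation score without touching networking specifics.
# `train_and_score` stands in for training a RouteNet-like model.

def train_and_score(learning_rate, hidden_units):
    # Placeholder score: pretend lr=0.01, 64 units is the sweet spot.
    return -abs(learning_rate - 0.01) - abs(hidden_units - 64) / 1000

grid = {"learning_rate": [0.1, 0.01, 0.001],
        "hidden_units": [32, 64, 128]}

best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda p: train_and_score(**p),
)
print(best)  # {'learning_rate': 0.01, 'hidden_units': 64}
```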
M
The second type of target audience is us, of course: people working on computer networks and network AI. They can take the data set, they understand what a network is, they can understand what a graph neural network is, and then they can try to do something which is way better than what RouteNet does today.
M
So the tentative dates are: the data set should be available in spring 2020; then participants should submit in the summer of 2020, so typically two months from when the data is available until they can submit the final solution; and then the award will be given later in 2020. We will use the usual dissemination channels.
M
This relates to the target audience I was discussing before. Regarding the academic paper: the plan is that after the award we will write an academic paper describing how the challenge went and what the participants did, co-authored with all the participants. That is typically a sort of reward for the best academic participants. And that's pretty much it. This is the kind of question that I have for the research group.
I
Thank you very much, Albert, for presenting this practical network AI challenge. We are looking forward to it, of course. I mean, the NMRG really welcomes this challenge: we can give you space if you need it, for presentations or advertisement, or you can work with us on that. You are very welcome, and you are part of the community anyway. And I would also like to raise this more towards the research group, I mean.
A
Okay, so the last presentation is just to give you a kind of summary of the side meeting that we had on AI; I think it was on Tuesday, with the people listed here. So it is not only my view, but a kind of summary of what was highlighted there. Just briefly, because we don't have much time.
A
So the key point: the objective of this meeting was to identify the challenges of using AI in our domain in general, let's say networking. What makes the use of AI in our domain different compared to other domains; just to find the specificities. We went through some discussions, and this is what basically remains. There are probably some challenges regarding lightweight artificial intelligence: even if at first sight this is not really specific to networking, of course, you can think that you may have lightweight devices in the network.
A
You want to have more, let's say, AI in the network. Of course this is also related to what happens in the COIN RG, so this was mentioned there. It is also related to all the distributed artificial intelligence, federated learning and so on. Here the network is not only a user of the AI, of course, but can also help to support the AI more efficiently, to synchronize between the different agents and so on. Those are quite classic challenges, I think. We already talked a bit about the data challenge.
A
I would say, of course, that a different problem is the data: data you don't have access to, or data that is not representative. We had a technical presentation today on generating data and on assessing a system more globally. So there are all the data problems, that you cannot access the relevant data, in particular when you compare our situation with other learning environments; so it might not be possible to make generalization as easy as one would hope. So there are different research axes actually identified here.
A
Another point is what we call the problem challenge. What I call the problem challenge is to find, or maybe to see, what is the right topic for AI in our area: a kind of mapping between the problems you may have and the algorithms we would like to use. One possible direction is, rather than having a one-to-one mapping, to try to describe some attributes.
A
The idea is to describe the problems we have in network management, or in networking, with a set of attributes, and to describe the AI algorithms with attributes as well, because you can have some characterization of the different problems on one side and of the solutions on the other side, and then try to find a mapping. So it is not a one-to-one mapping between one algorithm and one use case; it is more a mapping that you do over the attributes, so it might be more general.
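The attribute-based mapping idea can be sketched as matching problem attributes against technique attributes and ranking by overlap; all attribute, problem and technique names below are illustrative only, not a proposed taxonomy.

```python
# Sketch of the attribute-based mapping described above: characterize
# problems and AI techniques by shared attributes, then match on
# attribute overlap instead of hand-picking one algorithm per use case.

problems = {
    "traffic-prediction": {"temporal", "supervised", "regression"},
    "root-cause-analysis": {"explainable", "classification"},
}

techniques = {
    "lstm": {"temporal", "supervised", "regression"},
    "decision-tree": {"explainable", "classification", "supervised"},
    "kmeans": {"unsupervised", "clustering"},
}

def rank_techniques(problem_attrs):
    """Rank techniques by how many problem attributes they cover."""
    scores = {name: len(problem_attrs & attrs)
              for name, attrs in techniques.items()}
    return sorted(scores, key=scores.get, reverse=True)

for name, attrs in problems.items():
    print(name, "->", rank_techniques(attrs)[0])
```

Because the match is computed over attributes rather than fixed pairs, adding a new use case or a new technique only requires describing its attributes, which is the generality argued for above.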
A
Of course there is always this problem of the explainability of the results of AI, because the network experts and the AI experts may not be the same people, so AI must not be too complex to use. You may want to understand why the algorithm decided to do that and that in your, let's say, configuration, for example. And it should not introduce too much overhead, because if AI is just another tool that brings, let's say, yet another dashboard, it is not very useful; it will just not work.
A
So there are different ways to approach an AI solution for a network problem. One is to think about different levels of AI functions: for example, AI that will recommend, or that will guide the human operators, a kind of assistant; and then you can also think about approaches related to monitoring, where you will use the data. Or try to decompose the problem into small pieces and orchestrate all these small pieces, working more at the level of the orchestration.
A
You know, we don't think that we have to target a specific AI algorithm, of course, but rather look at how we can use them and orchestrate them all together, together with telemetry, for example, as well. So, as steps to progress: the side meeting idea is good; we always have these meetings where we discuss a lot, and we never have enough time to close the cases, so we would like to continue.
A
So what we would like to do is, let's say, a public document, a written report, a document that will first state what the challenges are, as we started to do before a bit. But of course this needs to be refined and discussed more, so it could also help to clarify the position of artificial intelligence here and in any other group.
A
If it has to have a position here, that is. And so we need, of course, diverse input for that, and we need your help; and your help is not only providing feedback. I will ask now, and also on the mailing list: we need people who really want to contribute to writing this report, a kind of core team to work on it; it should not be down to one or two people writing this. So that's it for me. So, yeah, one minute, well.
A
Thank you. To respond to Peter: okay, so regarding the data format. Yes, we may think about having some universal data format; I think we already have some. But we also all know that at some point we will have to deal with different data sources, and so it would be hard to say that we will always have this one data format.
A
So I think it is good to keep that in mind: we should maybe aim for that, but the point that appeared in the discussion was, rather than saying that we will have a universal data format, at least to have a kind of universal description, a way that all of us can understand. If you provide a new data set, I can understand your format, at least to some extent. But of course, yes, and I think that's it.
I
At the end of the session: thank you very much for being here, knowing it is Friday, the last session of the week; it was a long week. Thank you very much for being in the research group. Please, if you have inputs, put them on the mailing list or contact us; we need to understand what we would like to do in the research group about this topic.
I
We will also discuss with the IRTF chair, for sure, to get a bit of guidance on the next steps, because we have been investigating this topic for a couple of years now, so we really have to make a decision on how to continue. So your inputs are very important: to know if you have interest, and also to contribute and make suggestions for that.