From YouTube: IETF113-NMRG-20220324-0900
Description
NMRG meeting session at IETF113
2022/03/24 0900
https://datatracker.ietf.org/meeting/113/proceedings/
A
B
Hello, welcome everyone, it's time to get started. This is one of the first meetings we have in this context, with both on-site and remote participation, so please bear with us as we also learn through this process.
B
Jérôme and myself are participating remotely, and we have a so-called local backup, Diego Lopez, whom you can see in the room sitting next to the screen. So in case there are issues, or we need to handle anything happening in Vienna, we will rely on Diego for that, but the meeting will happen in Meetecho. I see that everyone is connected here. We will use the queue from Meetecho, and we will synchronize with the queue in the room.
B
So this is the NMRG meeting for IETF 113, and we need to go through a bit of Note Well slides before we go into the agenda. So, intellectual property: the IRTF follows the IETF intellectual property rights disclosure rules. By this we mean that, by participating in the IRTF, you agree to follow the IETF processes and policies.
B
If
you
participate
in
person
and
choose
not
to
wear
a
red,
do
not
photograph
lanyard,
then
you
can
send
to
up
here
in
such
recordings,
and
if
you
speak
at
the
microphone
appear
on
the
panel
or
carry
out
any
official
duty
as
a
member
of
iotf
leadership,
then
you
can
send
to
appearing
in
recordings
of
you
at
the
time.
If
you
participate
online
and
turn
on
your
camera
and
or
your
microphone,
then
your
consent
to
appear
in
search,
recording
recordings.
B
Privacy
and
code
of
conduct,
so
as
a
participant
in
or
attendee
to
any
itf
activity,
you
acknowledge
that
written
audio,
video
and
photographic
records
of
meetings
may
be
made
public
personal
information
that
you
provide
to
iotf
will
be
handled
in
opponents
with
the
privacy
policy
that
you
can
find
at
the
following
link
as
a
participant
or
attendee.
You
agree
to
work
respectfully
with
other
participants.
B
Please
contact
the
ombuds
team.
If
you
have
questions
or
concerns
about
this,
you
can
find
the
link
in
the
slide,
and
you
can
also
refer
to
the
two
rfcs
concerning
code
of
connect
and
harassment
procedures,
which
also
apply
to
the
context
of
the
rtf.
B
So
thank
you
for
your
attention
for
this
introductory
slide,
and
now
we
will
start
officially,
let's
say
the
content
of
our
meeting
just
before
going
forward
online
meeting
etiquette
so
be
aware
that
this
session
is
being
recorded.
Please
keep
your
audio,
muted
and
video
off
when
not
presenting
or
speaking
and
when's
thinking.
Please
start
by
stating
your
name.
Clearly,
it's
very
important
for
the
meeting
units
and
for
the
audience.
Thank
you.
B
Thank you, okay, all good. So for today we have a two-hour session. Regarding the agenda, we are currently in the introduction and research group status; we have a couple of additional slides in this presentation.
B
The main topic of discussion for today will be network digital twins. We have provisioned enough time; we hope to have an insightful discussion on this emerging topic, and we will have several presentations. The first one will be from Jordi and Albert about how to build a digital twin, comparing different approaches to that; we are really looking forward to this presentation. Then we also have, from Cheng on behalf of the other co-authors, a presentation on the research group draft "Network Digital Twin: Concepts and Reference Architecture", with discussion in the slot.
B
We
want
also
to
as
a
research
group
to
reflect
on
this
notion
of
network
digital
twin
and
what
research
could
be
associated
to
this
topic,
so
the
problem,
space,
research,
challenges,
questions
and
directions,
and
in
support
to
that,
we
have
also
shared
on
the
mailing
list,
but
also
on
the
hdoc.
B
Finally,
according
to
the
remaining
time
we
will
have,
there
is
also
another
topic
we
would
like
to
to
have
input
on
is
the
evaluation
of
cooperating
layered
asean
architecture
to
include
compute
and
data
awareness
and
lewis
will
be
making
this
presentation.
B
Okay, all good, so let's continue with the research group status. We have a few updates on the progress of our documents.
B
So
in
rnc
poll
we
have
the
intent-based
networking
intern
classification,
so
the
irg
member
needs
to
express
their
view
on
the
document.
This
poll
will
end
april,
14
14th,
so
it's
coming
pretty
soon.
The
next
step
will
be
to
if
this
is
positive
to
proceed
towards
a
ietf
conflict
review,
so
this
is
progressing
towards
rbc
publication.
B
The
second
document
second
document
is
in
rsg
review.
So
this
is
the
step
just
before
the
rsd
poll.
It's
intend-based
networking
concepts
and
definition.
This
is
still
under
progress.
We
received
the
review
from
the
irg
member,
and
this
is
going
through
interactions
with
the
co-authors
to
address
the
comments
and
then
be
able
to
proceed.
B
We
have
also
recently
adopted
research
group
documents,
digital
twin
network
concepts
and
reference
architecture,
so
we
will
have
also
more
insight
on
this
document
during
the
technical
topic
later
on.
In
this
session
also,
you
may
have
seen
on
the
mailing
list.
We
have
started
a
call
for
a
research
group
document
adoption
for
the
document
network
measurement
intent,
which
is
one
of
the
ibm
use
cases
of
nmrg.
B
If
not,
let
me
continue,
as
you
may
have
seen
in
the
agenda,
for
the
topic
on
ibn.
We
will
not
go
into
in-depth
technical
discussion,
but
we
wanted
to
show
that
this.
This
topic
is
also
progressing
in
a
number
of
other
groups
beyond
irtf
and
ietf,
and
that
there
is
more
and
more
also
standardization,
open
source,
but
also
research
activities
on
this
topic-
and
this
is
just
let's
say,
a
a
broad
overview.
B
It's
not
exhaustive,
but
we
hope
it's
still
important
for
the
community
in
energy
to
be
aware
of
those
of
those
other
activities
and
some
of
the
research
group
participants
are
already
well
connected
to
some
of
those
activities,
so
we
expect
also
to
have
collaborations
in
those
contexts.
B
So
what
we
have
for
other
groups
in
4r
in
the
linux
foundation,
the
onapp
project,
where
there
are
several
ibn
use,
cases
being
developed
and
part
of
the
different
honor
premises,
the
link
provided,
you
can
see.
Let's
say
it's
an
overview
of
the
different
intent-based
networking
use
cases
appearing
in
the
different
releases
and
by
browsing
on
the
on
up
wiki
looking
for
internet
base.
B
You
can
find
those
different
use
cases
and
more
details
about
who
is
involved
and
the
exact
topic,
but
it's
interesting
to
see
that,
since
several
years,
onap
is
continuously
pushing
to
have
intern-based
networking
use
cases
also
being
part
of
the
technologies
developed
in
onap
in
in
etsy,
the
zerotouch
network
and
service
management,
isg
and
hello.
I'm
sorry,
diego
has
been
recently,
I
point
as
a
new
chair
of
this
isd.
B
So
if
you
have
any
questions
on
this
group,
you
can
also
turn
to
diego,
but
there
are
a
couple
of
activities
also
in
this
isg
related
to
intern-based
networking.
So
there
is
a
group
report
group
report
11
which
targets
intern-driven
autonomous
networks.
It's
a
study
to
try
to
understand
the
different
definitions,
techniques
and
mechanism
of
intent,
driven
aspects
in
relationship
to
zero
touch
management,
and
this
document
is
triggering
a
lot
of
interesting
discussions.
So
I
invite
you
to
to
try
to
read
it.
It's
a
part
of
an
open,
open
draft.
B
It's
publicly
accessible
and
there
is
also
a
proposal
for
a
proof
of
concept
on
automation,
of
intent
based
plus
lead
line
service.
This
is
proposal
by
members
of
this
isg
about
using
the
zsm
specification
in
the
specific
context
of
intern-based
clause
line
service
in
the
itu.
You
have
also
a
focus
group
on
autonomous
networks
and
in
this
focus
group
there
are
a
set
of
activities.
B
Which
include
use
cases
or
poc
also
related
to
intent,
based?
Not
all
the
activities
are
related
to
internet,
but
you
can
find
a
few
of
them
and
the
main
document
you
can
find
is
on
the
poc
and
proposal
for
builder
tone,
where
you
can
see
the
latest
teams
willing
to
propose
activities
related
to
that
as
part
of
the
focus
group
activities.
B
Finally,
you
have
also
the
at
the
tm
forum
autonomous
network
project.
They
have
an
interesting
set
of
documents
addressing
specifically
intern
based
networking,
so,
for
instance,
ig
1253
intent
in
autonomous
networks.
You
can
find
the
latest
draft.
It's.
You
just
think
I
think
to
have
a
login
on
tmf
to
access
it,
but
it's
also
part
of
a
broader
set
of
documents
which
I
will
quickly
show,
and
this
is
an
interesting
development
because
they
have,
as
you
see
several
aspects,
they
really
want
to
specify
relate
in
relationship
to
intent.
B
So
the
document
I
was
mentioning
is
the
one
here
in
the
middle,
but
you
see
that
they
want
also
to
go
into
more
modeling
aspects,
different
capabilities,
api
development,
etc,
and
this
is
ongoing
activities
as
part
of
this
autonomous
networking
project,
which
I
think
is
a
very
interesting
development,
more
on
the
say,
events
or
research
side.
There
are
also
a
number
of
activities
ongoing
again.
This
is
not
exhaustive.
B
So
I
think
this
could
be
an
interesting
experience
for
also
researching
participants
to
really
have
a
more
practical
use
of
intern-based
networks,
also
more
and
more
papers
and
special
issues
in
the
literature
again.
This
is
just
an
extract
of
some
of
the
recent
paper
essentially
published
last
year,
as
you
can
see
different
special
issues
in
different
different
journals,
but
also
some
articles
either
survey,
type
of
articles
or
dedicated
approaches
for
a
global
view
on
them
on
intern
based
networking.
B
And finally, for this part of the meeting, the NMRG status and future meetings: the main thing we will have upcoming will be, let's say, a series of dedicated interim meetings, especially for follow-ups on the intent-based use cases.
B
We have received a draft on this notion; it's also linked to the AI document. There could also be two interesting projects, so we are trying to see how to organize something meaningful, a proposition from the research group, to have a good discussion on this topic. We will see what comes, but we will try to have something before the summer on this. For the next, let's say, plenary meeting, we currently plan to go to the next IETF, but the feasibility remains to be seen.
B
Jerome,
I
think
you'd
like
to
take
over
now.
C
Yeah, so, are there any comments or questions on what was presented before we continue?
C
Okay, all right, so yeah. The next item on the agenda was to give you a status update on the document regarding the AI research challenges for network management. So I will give you an update with respect to the previous version, which was version three.
C
So
I
know
that
this
document
was
here
for
a
while.
We
so
this
document,
just
maybe
to
to
to
make
it
clear
for,
is
a
shared
document
that
we
work
collaboratively.
But
it's
not.
It
is
just
a
google
document.
Actually
we
we
thought
that
it
was
better
to
work
on
it.
So
approximately
what
I
would
say
is
that
not
exactly
one
year
ago,
but
almost
one
year
ago
we
decided
to
first
freeze
the
challenge,
the
challenges
where
we
want
to
document
in
this
document.
C
Actually
this
was
a
temporary
freeze
because
of
course,
we
are
not
saying
that
we
will
be
exhaustive
to
have
whole
challenges
because
we
may
miss
some
some
of
them.
There
will
be
news
that
can
also
came
out
due
to
some
context,
and
so
but
at
least
we
wanted
to
freeze
it
in
order
to
really
progress
and
because,
at
that
time
we
have
some
bullet
points
for
challenges.
We
have
some
idea
a
lot
of
ideas
actually,
but
not
something
very
strict.
C
So
we
have
presented
this
list
and
then
we
have
asked
some
particular
contributors.
We
were
quite
willing
to
to
lead
the
first
with
me
to
try
to
consolidate
all
the
input
we
got
regarding
the
different
challenges,
so
we
define
templates
that
you
can
see.
Basically,
here
we
try
for
each
challenge,
have
some
motivation,
a
very
quick
state
of
the
art,
not
not
a
big
in-depth
survey,
but
at
least
the
quick
survey
regarding
the
challenges
and
in
patera
twilight.
What
are
the
remaining
problems?
C
Because
in
many
cases
we
have
already
tried
to
use
ai
for
the
challenges
it
works.
It
does
not
work
well
and
so
on
and
try
to
highlight
what
I
mean
the
remaining
problem
and
maybe
some
really
recent
results
or
orientations
that
we
could
investigate.
Tobias.
I
think
it
was
very
quite
successful
because
we
had
a
lot
of
challenges
that
were
well
documented,
so
I
really
want
to
thank
to
thanks
all
the
contributors
here.
C
Let's
say
editors
that
helped
me
to
put
that
in
the
more
let's
say
a
nice
way
at
the
time
we
have,
we
had,
let's
say
quite
let's
say
detailed
description
of
the
challenges
with
a
lot
of
references,
and
it
was
very
good
to
to
structure
a
bit
of
the
id
that
you
put
behind
the
challenge.
We
have
only
two
challenges.
What
we
didn't
had
really
input
was
one
what's
about
the
acceptability,
but
actually
it's
what
I
will
call
meta
challenges,
because
we
have
some
sub
challenges
that
are
also
related
to
acceptability.
C
I
don't
think
this
is
a
big,
the
big
issue
and
one
is
about
the
email
in
the
loop.
I
think
we
had
a
lot
of
discussions
that
we
need
to
have
the
human
into
integrity
with
ai,
even
in
our
domain,
for
network
for
property
network,
and
we
actually
have
not
really.
We
did
not
really
describe
one
of
these
challenges
at
that
time
and
still
today.
C
So,
based
on
that,
I
worked
to
prepare
the
four
questions
I
recently
prepared.
To
be
honest,
so
here
you
have
the
new
link.
The
idea
of
this
v4
was,
of
course,
to
go
over
all
the
document
and
try
to
again
consolidate
it.
C
If
we
know
somebody
is
maybe
an
expert
in
this
area,
maybe
you
should
reach
which
to
help
us
to
elaborate
the
document
and
so
on.
So
we
try
try
order
to
identify
some
people
to
to
to
help
us.
So
this
is
what
has
been
done
from
this
v-formation.
C
I
will
go
a
bit
more
into
the
details,
so
here
was
the
initial
list
of
challenge
that
we
have
that
you
had
in
v,
three
so
version
three,
so
maybe,
as
you
remember,
we
tried
to
categorize
a
bit
of
challenges.
We
had
four
criterias.
One
is
more
related
to
problems
that
relate
to
the
ai
technique
itself,
saving
other
world.
It's
when,
for
example,
we
need
to
really
work
on
the
algorithm
of
method
api
to
fulfill
the
needs
of
a
network
management
problem,
a
challenge
we
have.
C
We
have
identified
a
lot
of
problems
looking
to
data
access
to
data
or
to
present
data
in
a
relevant
way
for
of
needs.
We
have,
although
seen
that
there
is
actually
one
particularity
of
our.
Maybe
our
domain
is
that
we
also
want
to
you.
Many
cases
want
to
use
ai
not
only
to
to
to
predict
some
value
and
so,
but
also
to
really
take
decision
or
at
least
guide
or
help
in
decision
or
take
actions.
C
Does
this
make
a
bit?
The
focus
is
the
constraint
a
bit
different,
and
also
we
had
a
lot
of
discussion
at
regarding
accessibility,
why?
We
should
why?
What
would
be
the
the
obstacle
to
to
use
ai
for,
let's
say
operating
networks,
a
lot
of
issue
we
are
used
to
to
have
all
procedures.
We
cannot
also.
We
don't
want
to-
let's
say
let
an
ai
automatically
around
the
network
and
so
forth.
A
lot
of
discussion.
Of
course,
a
lot
of
we
had.
C
We
had
this
particular,
let's
say
criteria
of
what
I
call
it
a
challenge,
but
I
know
that's
not
a
good
term
anyway.
So
we
have
this
list,
as
you
can
see
here,
are
challenges
from
like
the
yti
data
management
and
so
on.
I
will
not
go
for
each
of
them.
Of
course
it's
not
the
goal
here,
but
after
the
reviews,
what
I
call
here
the
review.
C
Basically,
the
review
of
the
three
from
b3
to
v4
is
that
we
just
observe
that
many
of
these
challenges,
which
are
the
let's
say
the
first
column
here
mostly,
are
related
to
a
single,
let's
say,
main
problem,
or
let's
say
with
a
of
course.
For
instance,
you
can,
it
can
be.
The
challenge
may
be
somewhat
due
to
some
primary
data,
maybe
ai
techniques,
but
anywhere
we
put
that
it's.
It
was
still,
as
you
can
see
here,
we
tried
to
somehow
evaluate
each
other.
C
It
was
all
quite
very
focused
and
it
does
not
really
make
sense
to
to
try
to
to
let's
say
to
to
first
the
challenges
to
be,
let's
say:
multi
criteria.
Maybe
it's
good
just
to
say
we
focus
on
what
is
the
main
criteria
that
characterize
the
challenges
and
we'll
try
to
organize
a
bit
of
document
regarding
that
and,
although,
for
the
let's
say,
ai
for
anim
actions,
actually
it's
mostly
related,
maybe
to
the
ai
technique
that
would
run
behind
it's
some
kind
of
a
sub.
Let's
say
a
sub
level
of
the
ai
category.
C
So
this
was
the
our
understanding
after
this
between
this,
let's
say
internal
reviews
that
we
did
and
so
very
key
it.
It
turns
into
a
new
talk,
a
new
table
of
contents
that
you
can
see
here
on
the
left
and
on
the
right
side.
Is
there
a
previous
tag?
C
What
is
really
important
to
see
here
is
that
in
previous
document
on
the
free
version,
we
got
this
list
of
challenges
which
were
actually
a
big
table
where
we
have
one
rule
per
changes
we
described
and
though
we
just
really
a
structure
a
bit
based
on
the
different,
let's
say
categories,
so
more
morality
to
the
ai
techniques
that
we
will
need
to.
That
needs
to
be
extended
to
be
worked
on
and
for
network
management,
and
here
you
see
a
list
of
five
programs
that
comes
basically
to
the
to
the
challenge
that
you
had
before.
C
One
refers
to
the
problem
of
how
we
can
precisely
define
a
network
management
program
to
be
able
to
find
the
right
technical
right
set
of
techniques
we
could
use.
I
mean
ai
techniques
you
could
use
to
to
help
us
one
is
regarding.
Oh,
we
evaluate
the
preference
of
produce
model
not
only
from
an
ai
perspective
from
if
we
include
some
net.
C
Let's
say
network
specific
metric
into
the
a
algorithm
itself,
and
there
is
all
we
can
email
ai
include
network,
or
we
can
use
ai
to
really
the
challenge
of
using
ai
for
planning
actions
for
operating
network
not
only
distributed
ai.
I
will.
I
will
go
back
to
this
a
bit
a
bit
after
because
we
don't
have.
Actually
we
have
it
just
as
a
placeholder
now,
but
we
don't
have
a
challenge
really
well
described
here.
C
Then
we
have
all
the
things
related
to
the
data
say
that
I've
driven
the
data
in
ai
that
we
need
maybe
specific
data
in
our
case
or
we
collect
data.
This
is
also
program.
It's
not
only
on
using
data
to
get
data
to
extract
knowledge
to
share
data.
Maybe
these
are
all
of
that
now
included
in
this
part,
and
then
we
have
the
acceptability
of
ai,
how
we
can
explain
what
has
decision
of
network
a
products
or
we
can
ensure
that
when
we
have
an
ai
or
prototype
working
in
the
lab
environment,
we
can.
C
Then you can see that we keep, as the main section, the section that describes some very, let's say, difficult problems that we have in network management that could rely on AI; not always, but that could be good potential problems to investigate with AI.
C
But
what
you
can
see
what
you
can
see
here.
That's
also
some
other.
Let's
say
this
structure
has
a
bit
simplified
to
keep
worries
or
focus
on
the
equipment
on
the
challenges
itself.
So,
for
instance,
except
of
course
what
I
call
what
I
have
described
now
with
the
respection
of
the
challenge.
C
We
have
removed
some
parts,
we
don't
have
any
more
use
case
parts
so
to
maybe
to
recall
it
has
never
been
an
objective
for
this
document
to
list
some
use
cases
and
to
have
to
detail
some
use
cases,
and
we
really
insist
before
to
not
fill
out
this
section
before
we
are
satisfied
with,
let's
say,
a
description
of
challenges,
and
I
think
for
now
it
we
could
even
skip
this
part.
We
don't
have
need
to
have
this
use
cases
at
that
time.
C
I
think
to
have
so
the
id
now
isn't
about
to
go
to
a
more
to
reach
a
let's
say,
a
first
level
version.
We
don't
need
use
cases.
We
have
a
lot
of
content
that
which
I
think
is
really
valued
and
very
valuable.
C
So
we
assume
that
we
should
keep
at
least
for
this
first
level
version
of
norfolk
use
on
the
challenges
itself.
C
Anyway,
we
have
some.
Let's
say
when
it's
got
challenge.
We
give
some
illustrative
use
case
of
application.
Of
course
it's
not
detail
use
cases.
So
we
still
in
the
document.
It's
not
completely.
It's
a
theoretical.
We
still
provide
some
some
example
to
highlight
to
show
to
explain
the
different
challenges,
so
we
still
have
some
more
in
line
with
the
challenges,
but
not
as
really
detailed
use
cases
with
detailed
procedures
and
so
on.
It's
something
that
we
think
we
don't
want
now
and
also
for
directional
recommendations.
Somehow
this
will
be
a
kind
of
another
document.
C
C
Okay. Of course, we now have an introduction, where the idea is to show that we cannot say we will not use AI for network management. That does not mean we will use AI for everything in network management, but somehow it's something that we cannot avoid; we need to have it, and we highlight that with some, let's say, problems, very briefly, because we have a dedicated section for that. We also try a little bit to disambiguate between AI and machine learning here.
C
The
issue
that
most
of
challenges,
to
be
honest,
were
written
with
machine
learning
in
mind.
So
personally,
I
try
a
bit
to
make
it
more
generic.
It's
not
always
easy,
because
maybe
some
are
really
still
very
machine
learning
oriented.
I
think
it's
not
a
problem,
but
at
least
nutrition
try
to
a
bit.
Let's
say.
C
Disambiguate,
obviously,
that
we
have
it's
not
only
about
machine
learning,
there
are
so
much
homogeneity
challenges
that
can
be,
let's
say,
applied
to
different
ai
fields
or
there
are
more,
let's
say,
oriented
to
machine
learning.
So
we
try
to
I
like
that.
I
think
this
is
not.
We
are
still
reviewing
this
part.
To
be
honest,
and
I
hope
that
we
come
up
with
a
nice
nice
nice
version.
Then.
C
We
have
this
section
about
the
difficult
products
in
network
management,
so
previously
we
have
a
list
of
basically
bullet
points.
We
have
a
lot
of
problems
that
I
think
it
was
covered
in
documents,
so
everyone
put
bullet
points
and
then
we
try
something
to
organize
a
bit
this.
This
is
a
difficult
problem,
so
I
didn't
have
to
give
an
exhaustive
list
again:
here's
to
give
some
examples
that
we
could
use.
Maybe
when
you
describe
channel
and
then
so,
we
try
to
categorize
them
according
to
five
five
criteria.
C
Here, as you can see, for example, one is about a very large solution space; this can help to characterize a difficult problem. Some are related to uncertainty, or the unpredictability of the environment or the context in which your solution will be applied. Some can be guided by the need to deliver solutions in real time, and there are problems that heavily depend on data, where maybe you will need to analyze data from different sources. And maybe there is a fifth one, although I'm personally not a big fan of it.
C
If
that
is
about
the
the
need
to
be
intermittent,
you
mind
processing,
I
think
this
is
not
really
a.
This
is
not
a
something
that
comes
directly
from
from
the
problems
you
want
to
tackle
that
you
need
to
be
integrating
the
human
process
just
because
it
comes
from
the
procedures
that
you
are
not
the
problem
that
you
want
to
take.
So
this
is
something
that
is
more,
let's
say,
a
constraint
that
we
had
on
top
of,
or
let's
say,
our
solutions,
but
the
problems
you
want
to
to
achieve.
C
For
example,
here
we
have
some
example,
then,
below
like
a
computation
of
optimal
classification
of
network
traffic.
If
you
don't
need
humans,
it
should
be,
should
be
something
possible,
of
course,
if
it's
not
always
the
case,
but
it's
not,
let's
say
a
constraint
that
comes
directly
to
the
problem
to
to
so,
of
course,
now
we
have
only
two
problems
which
are,
let's
say
described
in
a
more
let's
say,
a
textual
way.
C
So, what mostly remains? I would say there is a first issue regarding the human-in-the-loop challenge: you see, the current description is very lightweight; it's what is in italics here. So the question is: should we keep it, or should we somehow omit it from the document?
C
This
is
the
goal
is
not
to
provide
a
full
list
and
exhaustive
list
of
challenges,
somehow
somebody's
looking
to
come
out,
nor
these
are
new
organizations
for
each
let's
say
big
category.
We
have
a
kind
of
introduction
for
each
challenge
related
to
ai
technique.
We
have
an
introduction
trying
to
give
a
bit
the
scope
of
this.
All,
let's
say
sub
challenges
and
the
let's
say
your
human
subcharge
will
be
part
of-
let's
say
ia
techniques,
probably
that
that's
most
integrated
as
a
human,
so
it
can
be
somewhat
in
putting
the
introduction.
C
C
From my perspective, I think we had a lot of discussions and a lot of contributions on distributed AI, but we didn't really put them in the document. I think it would be essential for us, so, in my opinion, we need to develop this part a bit more as well.
C
Of
course,
then
you
have
a
lot
of
other,
let's
say
editorial
parts.
We
need
to
complete
introduction
conclusion
and
so
on.
We
need
to
reference
and
so
on,
so
the
ideas
that
were
then
to
I
know
that
I
already
promised
a
bit
before
to
transfer
these
google
documents.
Let
me
give
you
a
draft
something
that
is
just
in
my
mind.
I
hope
that
you'll
be
able
to
to
try
this
music
document,
and
I
created
with
everything
really
for
me.
C
Also,
all
to
progress
now
so
to
know
we
have
lost
a
lot
of
contributors.
It
was
really
great.
We
had
a
lot
of
the
id.
No,
I
I
would
like
to
go
with
a
smaller
equatorial
team
to
be
with,
maybe
more,
let's
say,
a
focus
team
really
working
on
the
world
document,
not
only
on
specific
challenge
to
to
totally
do
this.
Let's
say
to
do
all
the
editorial
paths,
to
all
music
elements
and
so
on
and
to
I
think
it
would
be
very
important.
C
So, as we already discussed, we cannot put all the contributors as authors of the draft; we need to restrict the list to a small number of people, and, of course, all contributors will be listed in the contributors and acknowledgements sections, for sure.
C
After
the
review
of
the
document,
this
is
a
small
dot.
Our
team
will
really
be
an
important
challenge.
We
will
make
some
change
and
hopefully
mid-may,
will
be
able
to
deliver
nice
documents
that
we
got
for
review
from
everybody,
but,
of
course,
as
a
link
is
open.
If
anyone
is
already
interesting
to
give
you
to
give
us
some
comments,
it's
really
it's
really
open,
so
do
not
hesitate
and
to
already
write
comments
before
and
yeah.
That's
it
for
me
any
comments
or
questions.
B
I like your approach. For the document itself, what do you intend to describe: really a series of problems? I mean, what will be the focus of this section and of the document? Will it be more on the criteria, or will it be more on trying to find some problems that exhibit the different criteria C1 to C5?
B
Just
to
understand,
because
I
think
see
the
criteria
are
quite
interesting
and
pretty
I
mean
transverse
finding
problems
that
have
many
of
the
criteria
etc.
B
We
will
be
kind
of
going
again
into
not
use
cases,
but
our
problem
is,
and
then
it
becomes
a
bit
the
question
which
one
to
select
to
be
to
be
signific,
significant
or
representative
of
network
management
problems.
C
Okay,
so
actually
it
was
a
bit
in
kind
of
let's
say:
reverse
engineering,
this
criteria
it
was
based
off
what
has
been
already
provided
in
terms
of
problems
so
yeah.
The
idea
of
using
this
criteria
is
just
to
help
us
to
to
try
to
somehow
have
yes,
I
have
problems
that
fulfill
different
materials.
Many
criterias.
C
The
idea
of
the
document,
of
course,
is
not
the
description
of
the
problem
itself,
but
if
you
have
some
some
problems
that
are
used
and
in
the
chinese
description
we
have
already.
Actually
we
have
many
many
people
and
they
contribute
to
describe
challenges
and
they
give
an
example,
for
example,
and
so
the
idea
that
we
should
take
this
example
and
put
them
in
the
first
section
and
try,
of
course
this
example,
because
many
examples
are
similar,
so
they
are
quite
transversal
and
try
to
show.
C
Why
is
why
this
why
these
problems
are
very
hard
and
we
actually
yeah
this
criteria,
should
that
should
have
to
understand
why
they
are
very
hard
and
then
for
each
section.
You
can
refer
to
this.
To
this,
let's
say
big
problem
when
you
describe
why
this
is
a
challenge,
for
example,
I
don't
know
to
use
a
lightweight
ai
in
network,
for
example,
because
we
have
we
need
some
out
to
the
reverse
relation
constraining
determinative
type.
So
we
need
the
ai
and
working
at
like
minutes
we
put
so
we
put
so
so.
C
I think there is no, let's say, limit to the challenges; we could put more for the range of network management, of course. So yes, we have challenges maybe to resolve some problems that we already have today, or maybe we want to integrate AI to solve new problems that we will have in network management, beyond those we already have today.
C
Let's
say
it's
not
this,
let's
say
fixed
and,
of
course,
if
you,
if
you
need
to
extend
it
some
solution
with
ai
yeah,
it's
also
kind
of
a
challenge
that
we
should
highlight,
but
it
must
be.
I
like
it
not
not
from
a
single
problem
perspective,
but
more
globally.
D
So
so
maybe
I'm
just
thinking
is
there
some
criteria
that
is
talking
how
to
evolve
the
existing
solutions
and
how
to
integrate
the
ai
ml
into
the
existing
solutions?.
C
I
think
this
is
part
actually
of
the
acceptability,
which
is
you.
You
know
it's
acceptability
main
challenges
and
then
you
can
accept
challenges.
Maybe
this
is
a
good
challenge
to
have
here,
because
if
you
have
time
to
look
at
the
document,
I
try
to
to
add
this
id
that
we
have
already
procedures.
We
have
already
solution.
We
cannot
just
say
from
the
from
from
today
to
tomorrow,
which
change
everything
we
need
to
to
some
way
integrate
incrementally
ai.
C
This
is
something
that
I
try
to
to
make
it
appearing
in
the
let's
say:
acceptability
challenges,
and
then
you
have
some
challenges,
but
here
in
the
second
one,
which
is.
C
That one is about moving to production systems: actually this is where we move from, let's say, lab solutions to solutions in the real network, and yes, there are some challenges there, and we tried to figure out what the orientations could be as well. Of course, I think maybe we will then have questions as well, and then we will go to the digital twins session. That's why I think it's also a possibility.
C
Research on digital twins should help to integrate AI incrementally into production systems, because you test it in, yeah, digital twins. So this is something that appears here as well, so yeah, your remark is fully valid. So if you are interested in this, I mean, you can focus on this section and give me some feedback if you want; I will be very happy to take it, just to know if this makes sense. I like what you said before.
B
We'll now be on network digital twins, and the first presentation will be from Jordi and Albert, on how to build a digital twin. Albert, you want to present? "I will present, and you just tell me if you want to switch." Right, okay, yeah, okay.
E
Okay, so I will start. Thanks a lot for inviting me for this talk; I hope that you find it interesting. What we're going to talk about is how you can build a digital twin; we will compare a bunch of technologies and we will try to understand what the pros and the cons are. So, next slide, please.
E
So
this
is
a
digital
twin.
I
think
that
I
don't
need
to
describe
further
the
concept
right.
It
is
a
digital
representation
of
a
networking
infrastructure
and
it
has
been
documented
already
in
drafts
and
many
papers
and
instead
of
focusing
on
on
what
is
the
digital
twin,
we
will
focus
more
on
how
we
can
build
it.
So
next
slide,
please
so
next
one.
E
So
I
think
that
the
the
first
question
we
need
to
answer
ourselves
whenever
we
want
to
discuss
the
digital
twin
and
specifically
when
we
want
to
discuss
how
to
build.
It
is
what
are
the
inputs
and
the
outputs
it's
very
hard
to
have
a
meaningful
discussion
on
the
digital
twin
on
the
network,
digital
twin.
If
we
don't
first
of
all
discuss
what,
if
it's
a
box
right,
if
we
agree
that
it's
a
box,
then
it
has
some
inputs
and
some
outputs
and
that's
the
very
first
thing.
E
E
So in order to answer the question of how we can build a digital twin, we have decided to go for these inputs and outputs. As inputs we have the network configuration, which means: this is my network topology, I have this type of queues, I have these scheduling policies, like strict priority or weighted fair queuing, I have this routing protocol, I have an overlay routing protocol with segment routing, and so on. That's the network
E
configuration. The traffic load is exactly that: what are the packets that are entering my network? How many users do I have, what type of traffic do they send, do they send voice over IP, video on demand, data traffic, backup traffic, and so on? So those are the two inputs, and as the output what we propose is the resulting network performance.
E
So if I have a network with this particular configuration, with that topology and this particular router equipment and switches and so on, and I load it with this particular traffic, this is the performance that I will get. And the performance can be measured through: what will be the delay of the flows from the users? What will be the delay of the voice and video-on-demand traffic? How many losses will I have in my network? What will be the link utilization, and so on?
E
It is very important to note that we are not claiming that those are the right inputs and outputs. Those are the ones we are considering in my group to answer the question of how we can build it and to see which challenges lie ahead, and they are relevant for the sake of the discussion. I think that those inputs and outputs can be challenged, and we can have a discussion on whether those are not the right ones and maybe there are better ones.
E
Then, why do these inputs and outputs make sense, at least to me? Because if you take a real network infrastructure and you look at it as a box (and what I'm going to say may sound obvious, because you are running networks and building them, but let me explain it from this angle), at the end, if we assume that it's a transit network, what you have is data packets that are getting into the network and data packets that are getting out of the network, right?
E
If that's a transit network, we have traffic going from ingress to egress. Now, this traffic is packets, but at the end it can be voice traffic, video traffic, and so on. And then you have some sort of administrator, which can be a network management platform or a controller, which is applying a configuration to the network, or the network is already configured. The configuration is everything that is
E
in the configuration file of each and every network device. And then, if you assume that you have some sort of telemetry platform, what you have is performance metrics, so you can measure: okay, this is the delay for this type of flow, this is my utilization, these are my losses, this is my link utilization. So this is how we see a network. Of course, not all networks are like this; some networks consume and produce traffic.
E
E
You are applying exactly the same configuration to the performance network digital twin as you have in the real network, but instead of using the real traffic to test the network digital twin, you input a description of the traffic, because it's not a real network, so you don't put packets into the digital twin. You put in a description of what these packets look like: how many flows do I have, how many voice or video-on-demand flows do I have, what is the rate of these video-on-demand flows?
E
How many users do I have, and what is the dynamic, temporal behavior of the users, because maybe at night I have more traffic than during the day, because it's a residential network, and so on. So those are the two inputs to a network digital twin. And then the output is not the packets, because we are not putting packets into the digital twin; what we are putting in is a description of the traffic, and what we get is a description of what the performance of that network will be
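The black-box interface described above (configuration plus a traffic description in, a performance description out) can be sketched as a type signature. This is only an illustration of the idea; the names and fields are assumptions of mine, not an API from the talk or any draft.

```python
from typing import Protocol


class NetworkDigitalTwin(Protocol):
    """The 'box': configuration + traffic description -> performance description."""

    def estimate(self, configuration: dict, traffic_description: dict) -> dict:
        ...


# A trivial stand-in implementation: it only derives link utilization
# from the offered load, ignoring delay and loss entirely.
class ToyTwin:
    def estimate(self, configuration: dict, traffic_description: dict) -> dict:
        capacity = configuration["link_capacity_mbps"]
        load = traffic_description["offered_load_mbps"]
        return {"utilization": min(1.0, load / capacity)}


print(ToyTwin().estimate({"link_capacity_mbps": 1000},
                         {"offered_load_mbps": 400}))
```

The point of the sketch is only that the twin consumes and produces descriptions, never packets.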
E
for that traffic with this particular configuration. So again, what the network digital twin is doing is simulating, let's say, a network which has been configured with this configuration and has this type of traffic, and then it will tell you: okay, this is the performance you will get. So next slide, please.
E
So I don't want to discuss the use cases of a performance digital twin; I don't think that's the focus of this presentation. But I think it is very hard to have a meaningful discussion on how you can build something if we don't see why it's relevant. So let me go super quickly over a few use cases.
E
E
What if I change this? If the autonomous system does not have a digital twin that will tell it, "okay, if you do this, the performance will be bad, so don't do it", then it's very hard for your autonomous system to decide what to do. So basically, that's the main goal of our performance network digital twin. Here we have a set of use cases, but in the paper listed below you have way more use cases; you can check them.
E
For instance (and I'm assuming here that this digital twin is deployed together with a traffic telemetry platform and a network management platform, which are not easy to implement or deploy, but I assume they are already there, and that's out of the scope of my presentation), you can answer what-if questions. Let's say that I'm employed at a company, I'm running the network, and my company is thinking about acquiring another company. So I can ask my network digital twin:
E
E
I have a 5G core and I have a backup 5G core, but what will be the impact on the users if one of the 5G cores fails and all the users need to be redirected to the backup one? What will be the session establishment time?
E
What will be the impact on the session establishment time, and so on? Or you can ask questions such as: can I support new user SLAs with exactly the same resources I have, or do I need to buy new network equipment, or maybe I need to upgrade a link? So this is a set of use cases; you have more in the paper if you are interested. So next slide, please.
E
So now we have at least the reason why we would build a performance network digital twin, and then we can discuss how we can build it.
E
Okay, so next slide. What we have built is a performance digital twin, as I was explaining. To be more specific, the configuration that we assume is: we support the topology of the network (we're assuming a fixed network with switches and routers), the link capacities, and the routing that you're using on your network, where we support overlay routing, like services over MPLS or LISP, and underlay routing as well, like OSPF or BGP, whatever you are using there.
E
What scheduling policies are you using? Any arbitrary scheduling policy will work, like strict priority, weighted fair queuing, deficit round robin, and so on; then the queue lengths, and then other features such as ECMP, LAG, and so on. I know that some of these features are very old, and our goal is to academically show whether this can be built at all; then the real features, I think, are up for discussion with people that have way more knowledge of how the industry works.
E
Okay, and the traffic load, as I was saying: you have the traffic matrix, which is how much bandwidth I have from one ingress point to one egress point, and then we also support flows, meaning that we assume the performance network digital twin will support flows: we have this number of flows that start here and here, and each flow has a different type of traffic, like voice over IP, video on demand, web, and so on. And we don't require that these flows are described as five-tuples.
E
We support any level of granularity. So, for instance, a flow can be from ingress to egress, or you can assume any other level of granularity, and this will support it. Okay, so next slide.
E
So now let's try to build this box with a simulator. Next slide, please. Here what we're doing is taking this box, and this box is actually a simulator: it's running C code, or whatever language you're using, and it's implemented using a simulator. So next slide.
E
Okay, so we actually did that, and for that we used the OMNeT++ simulator, which is a discrete-event simulator. What does a discrete-event simulator simulate? The propagation, transmission, and forwarding of each and every packet; the forwarding of each and every packet is what is considered a discrete event inside the simulator. So basically, a simulator is code that takes all these events, goes through all of them, and works out what happens.
E
Discrete-event simulators are very well known in networking. There are many more; OMNeT++ is just one of them. There are ns-2 and ns-3, Cisco Packet Tracer, and so on. So there are a bunch of them, but all of them work under the same principle, which is that they simulate what happens to each and every packet. So if you take one simulator and you try to build this kind of box, you will find that the accuracy is very good.
E
Now, what is accuracy? This was one of the questions that Jérôme was sending to the list. If I take a real network, apply a configuration, load it with a certain amount of traffic, and measure in the real network what the delay is, for instance for a particular flow, and then I do the same with a simulator, then the difference between the real delay, measured on the real network, and the delay measured in the simulator,
E
that's what I call accuracy. The higher the error, the less accurate the technology is. So accuracy is very good in a simulator: typically you get perfect or almost perfect accuracy. But what about the simulation time? So next slide, please.
E
So what happens is that, although simulators are very accurate, the problem is that the time it takes for a simulator to simulate a network scales linearly with the number of packets, which are the discrete events at the end of the day, and if you think about how a simulator works, this is exactly right. For instance, just to give you an idea, one billion packets takes 11 hours on a quite busy computer, and one billion packets is roughly equivalent to one minute of a single 10-gigabit-per-second link. Okay.
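The linear-scaling claim above can be turned into a back-of-the-envelope calculator. The constants below are only illustrative, taken from the figures quoted in the talk (about 10^9 packets in 11 hours, and 10^9 packets per minute of a 10 Gb/s link, which implies an average packet size around 75 bytes); they are not measurements of any specific simulator.

```python
# Wall-clock time of a discrete-event simulation grows linearly with the
# number of packets processed, since each packet is (at least) one event.

PACKETS_PER_HOUR = 1e9 / 11  # ~9.1e7 packets simulated per wall-clock hour


def sim_hours(link_gbps: float, seconds: float,
              avg_packet_bytes: int = 75) -> float:
    """Estimated wall-clock hours to simulate `seconds` of one link."""
    bits = link_gbps * 1e9 * seconds
    packets = bits / (avg_packet_bytes * 8)
    return packets / PACKETS_PER_HOUR


# One minute of a single 10 Gb/s link -> ~11 hours, as quoted in the talk.
print(round(sim_hours(10, 60), 1))
```

Doubling the simulated time, the link rate, or the number of links doubles the estimate, which is why whole-network simulation at real capacities quickly becomes impractical.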
E
So if you want to simulate one minute of a 10-gigabit-per-second link, it will take you 11 hours on a quite busy computer. So of course it is impractical to simulate a real network, which has tens of links, with even higher capacities than 10 gigabits per second; it would take you a week to do that. So although simulators are very accurate, they have a huge computational cost and they are not practical,
E
because any question you ask the digital twin will take weeks to answer. And actually, if you want to simulate a real network with real link capacities and real traffic, it's not even a week, it's probably more, and you are spending a huge amount of computational cost. So that's why I believe that using a simulator for this is not practical. So next slide, please.
E
Okay, now let's go to emulation. So next slide. What is an emulator? If you take a network, a real network, it is made out of two main components: the hardware, which typically is designed specifically for networking (you have network processors and ASICs and so on, which are designed for packet processing), and then the software which runs over this hardware. So an emulator is basically taking (and I'm sure that many of you already know this,
E
but let me go super quickly through it) only the software components of your network and running exactly the same software components, or as close as possible, but instead of on the specific hardware, on a general-purpose CPU or in a cloud. So you take exactly the same software that you have on your network and you run it on a general-purpose CPU. This is basically, you know, like a software router or a virtual network function, and so on. So next slide.
E
So if you build a network digital twin using emulation, what will happen is that you will have very poor accuracy, because of course the delay seen by the packets going through your emulated network will be way higher than in the real network, because your emulated network is extremely slow: you are not taking advantage of the specific hardware that your network is using, because basically you are running your network on a general-purpose
E
processor instead of on the single-purpose processors that you actually have. So emulation will not be accurate: the delay that you measure in the emulation will not be accurate with respect to the real infrastructure. This does not mean that emulation has no relevant use cases; I think that it has many relevant use cases, which are also linked to other types of digital twin that are not based on performance, for instance for training, for debugging, or for testing new features.
E
E
So now, analytical models: queuing theory. Next slide. Now we take a different approach, and instead of using simulation or emulation, we try to see what happens if we try to build this box with analytical models, with equations. For this we use queuing theory, because queuing theory is basically our best available analytical tool for modeling networks. And I'm sure that you remember something about queuing theory, probably from your grad studies.
E
E
The application of queuing theory to networking was pioneered by Leonard Kleinrock, who is considered one of the fathers of the internet. He laid the foundational theory behind the internet because he was a pioneer in the application of queuing theory to packet-switched networks in the 70s, and that's one of the reasons why ARPANET decided to go for packet switching, and this is what we have today. Okay, so next slide.
E
E
So when you do that, what happens is that the digital twin is super fast, meaning that when you ask a question, you get an answer super quickly, because in the end the only thing that you need to run is a set of equations, and that's computationally very lightweight and very quick.
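As a minimal illustration of why an analytical model answers instantly: the mean sojourn time of an M/M/1 queue is the closed-form expression W = 1/(mu - lambda), so "running the model" is a single arithmetic evaluation. This is the textbook formula, shown only as an example of the approach; it is not necessarily the specific queuing model used in the work described above.

```python
def mm1_mean_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time W = 1 / (mu - lambda) of a stable M/M/1 queue.

    arrival_rate: lambda, packets per second entering the queue.
    service_rate: mu, packets per second the server can process.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)


# 800 pkt/s offered to a 1000 pkt/s server (80% utilization):
print(mm1_mean_delay(800.0, 1000.0))  # 0.005 seconds
```

The speed comes at the cost the speaker describes next: closed-form results like this assume Poisson arrivals, which realistic traffic does not follow.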
E
What happens when you try to see how accurate your queuing theory model is when estimating the delay is that you will find that under realistic traffic models it is not accurate at all. For instance, in the plot on the right you have three different traffic models, autocorrelated, modulated, and multiplexed, and on the y-axis you have the error. For modulated traffic, which I will explain in a second,
E
our queuing theory model estimates the delay with a 68% error. So queuing theory does not work in this case: it estimates a delay which is 68% off the real one, so it is not very accurate.
E
This is not a limitation that we discovered at all; this is very well known in queuing theory. People working in queuing theory are completely aware that it only works for very synthetic traffic, not for realistic traffic, so it is not an accurate solution here. That's why I believe that it is not practical to build a network digital twin using queuing theory: it is not accurate under realistic traffic, although it is very fast, as opposed to emulation and simulation.
E
So now let's try to build this kind of system with neural networks. How can we build this kind of system with a neural network? Neural networks all require training; that's the first thing you need when you want to build something with a neural network. So let's take an example from computer vision, which is an application we are already aware of.
E
Let's say that we want to build a system that identifies pictures of animals and is able to tell us: okay, this picture is a dog, this picture is a cat, and so on. The first thing you need is a training set, meaning you need a set of pictures with the right labels: this picture, that's a dog.
E
I'm telling you that this picture is a dog and that one is a cat, so you need the input and the output of the model, and this is what the neural network uses for training. And then, after training, you can ask the model a question: okay, see this picture, which you have never seen before; now tell me if it's a dog or a cat or an elephant, and it will tell you. So in networking we can do exactly the same thing.
E
What we can do (recall that the input labels of the performance network digital twin are configuration and load, and I know that's complex, but this is the approach we are taking) is take one network and say: okay, if I apply this network configuration and this traffic load, what performance do I get? And that's the first row of my data set: input label and output label.
E
Then I take another row: I take a different network configuration, I change the topology, or I change the segment routing configuration or the queuing policy, I change the traffic, and I measure the performance, and that's the second row of my data set. And now I need thousands of rows, which is complex and costly. I need thousands of rows, and when I have those thousands of rows, then: next slide.
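One "row" of the training set described above pairs an input label (configuration plus traffic) with the measured performance. A sketch of what such a row might look like follows; the field names and values are illustrative assumptions of mine, not the schema of the actual data set.

```python
from dataclasses import dataclass, field


@dataclass
class Sample:
    """One row: (configuration, traffic) input label -> measured performance."""
    topology: list        # edges as (node, node, capacity_gbps)
    routing: dict         # flow id -> path as a list of nodes
    scheduling: dict      # node -> scheduling policy name
    traffic_matrix: dict  # (ingress, egress) -> offered load in Mb/s
    # Measured outputs, i.e. the training targets:
    mean_delay_ms: dict = field(default_factory=dict)  # flow id -> delay
    loss_rate: dict = field(default_factory=dict)      # flow id -> loss

row = Sample(
    topology=[("a", "b", 10), ("b", "c", 10)],
    routing={"f1": ["a", "b", "c"]},
    scheduling={"b": "strict_priority"},
    traffic_matrix={("a", "c"): 400.0},
    mean_delay_ms={"f1": 1.7},
    loss_rate={"f1": 0.0},
)
print(row.mean_delay_ms["f1"])
```

The training loop then sees thousands of such rows, each produced by actually configuring, loading, and measuring a network, which is what makes the data set expensive to build.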
E
So next slide, please. Okay, then, once I have thousands of rows, I can train a neural network, and I will tell it: okay, when you see this network configuration and this traffic load, that's the performance, and so on. That's training. After training, which is very costly, you have a trained model which is very lightweight.
E
I will be able to ask: okay, if I have this traffic load, which you have never seen in the past, and this network configuration, which you have never seen in the past, what will the performance be? And if I do the training correctly, and many other things correctly, the answer should be accurate. Okay, so next slide, please.
E
So this is what we did, and for that we used a particular type of neural network called graph neural networks, which are designed to learn from information which is structured as a graph. Depending on your application in AI, you need to use a particular family of neural network architectures: if your information is pictures, you use convolutional networks;
E
if your information is text or voice, you use recurrent networks; and if your information is structured as a graph, which a network basically is, you use graph neural networks. So we did precisely that, and we call it RouteNet; you have a paper there as well, and we are now implementing the performance network digital twin with a trained GNN. So we train it, and that is the performance network digital twin. So next slide.
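The generalization property claimed in the next turn (train on 20-30 routers, infer on graphs 10x larger) comes from the message-passing structure of GNNs: the same local update function is applied at every node, so it is independent of graph size. A toy, non-learned version of message passing is sketched below; real GNNs learn the update function from data, and this fixed averaging rule is only meant to show why one set of weights applies to graphs of any size.

```python
def message_passing(adj: dict, state: dict, rounds: int = 2) -> dict:
    """One fixed message-passing rule applied on an adjacency dict.

    Each round, every node's new state is half its own state plus half
    the average of its neighbours' states. The rule never refers to the
    total number of nodes, so it runs unchanged on any graph size.
    """
    state = dict(state)
    for _ in range(rounds):
        new = {}
        for node, neighbours in adj.items():
            avg = sum(state[n] for n in neighbours) / len(neighbours)
            new[node] = 0.5 * state[node] + 0.5 * avg
        state = new
    return state


adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
out = message_passing(adj, {"a": 1.0, "b": 0.0, "c": 0.0}, rounds=1)
print(out["b"])  # 0.25: half its own state, half the neighbour average
```

In a trained GNN the averaging step is replaced by learned neural functions, and the final node states are read out into performance predictions such as per-flow delay.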
E
E
So it's a completely different network, and what we have seen is that if you train this particular model on networks of only 20 to 30 routers, and then, after training, in inference, you ask, "okay, what is the performance of this network", which is 10 times larger and which you have never seen in training, then the error in the worst case is 10% when estimating the delay. So it's quite a remarkable accuracy.
E
Also, the GNN is very fast, similar to queuing theory. Training is very expensive, but after training you can ask a question and the GNN will give you an answer in 100 milliseconds, and that's not unique to GNNs, that's pretty much all neural networks. Here we are using off-the-shelf computing, but if you use AI accelerators, you can drop this number to 10 milliseconds. So, super quick.
E
E
And finally, what we also did was test what happens when you have realistic traffic models, not as in queuing theory, which only works with very synthetic traffic models, but traffic models which are, you know, similar to TCP or similar to voice over IP and so on. And as you can see, the accuracy is also very good across the board, and the error when estimating the delay is always below 10%; you have all the details in the paper.
E
Now, GNNs are not a free lunch, right? You need to build a data set, and that's quite complex and costly, because, as I was saying, you need to take one network and start trying configurations, trying traffic, and seeing what the performance is, and this is your training set for the GNN. So next slide.
E
So that's a table that hopefully summarizes what I said during my presentation. Emulation will give you poor accuracy and it is very, very slow, so it is not practical. With simulation, the accuracy is very good, but the speed at which you get answers is very slow: it will take hours to simulate one single link on a very busy computer. Queuing theory
E
offers poor accuracy under realistic traffic conditions, but it is very fast at getting you answers. Then you have another row which I didn't explain for lack of time: we also tested other types of neural networks which are not GNNs, and you have the information in the backup slides. But finally, what we have found is that with GNNs you have very good accuracy and they can give you an answer very fast. And that's it from my side; I don't know if you have any questions or comments.
B
We have Luis first, then Diego.
F
Thank you very much, a very interesting presentation. I have a few questions. First, you mentioned in the GNN evaluation an analysis with 20-30 routers. When you talk about increasing the number of routers, do you refer to the internals of the network, keeping, let's say, the perimeter at certain entry points, or also to increasing the number of entry routers, or rather the entry traffic?
F
Oh, and the sources, okay. Another short question: you mentioned that you can train and then change the different configurations, but I was wondering, for instance, if you introduce new behaviors, which could be, for instance, preemption in the queues, what if this has not been trained before?
E
You need to train. Let me elaborate, because that's a very, very good question. GNNs will be able to provide accurate answers for different values of the input parameters, so for a larger topology or a different routing configuration. But for a new feature, like a new protocol which the GNN has never seen during training, you need to retrain it.
E
Okay, as an analogy: if you have a computer vision model and you train it on a set of animals, then yes, it will identify any picture of that set of animals, even if that particular picture was not in the training set. But if there is a new animal, you need to retrain it.
F
Last question: with these techniques, could we also have a view of jitter, so not only delay or throughput, but also jitter?
E
G
No, I was simply looking at this and thinking that we live in a world of composition and containers and lambdas and the like. Don't you think that, probably in many or most practical cases, a digital twin will be a combination of several approaches? I mean, if you are not very much concerned about accuracy but want an approximate value, probably using, I don't know, queuing theory for a certain part, for the core,
G
E
E
D
E
E
But this is the kind of discussion I think has value, right? Because when you start thinking about, "okay, what is my output, what do I want to do? I want to predict the delay", then the discussion is much easier to have, because we are down to the graph. And that's fine: if those inputs and outputs are not the right ones, maybe they are not the right ones. But when we have the inputs and outputs, the discussion is way easier.
H
So, I'm on board with that. Thank you very much for this presentation; I've been waiting for it for quite some time, because there is quite a mix of, you know, emulation versus simulation versus analytics, etc., so I like it very much. Okay, so the conclusion is that your graph neural network is fast and accurate, great.
H
Well, maybe, because in the end we deal with routers that have hardware limitations, that sometimes have bugs, that have their own limitations depending on how they're connected. So my message is: I'm wondering if we're not trying to solve everything with the digital twin, in the sense that, okay, the digital twin is like a copy of your network.
H
Now we speak about the performance network digital twin, and I'm wondering how far we can go, because, you know, you mentioned the four metrics that you've got. Network utilization: well, if you put more flows in there, sure, it's easy to deduce. Then there is delay: as you mentioned, it's additive, that's a great property, so we could get it with a high level of accuracy. If we go into packet loss,
H
well, it's not going to be hardware independent, it's hardware dependent, right? And if we go to jitter, as you mentioned, that's the most complex one, because we cannot just add it, so we actually have to test it. So I slowly arrive at the conclusion that maybe the digital twin is not the tool for everything, because if we want to test performance, the things that we care about being delay, packet loss, jitter, and link utilization, only link utilization is easy.
E
So I agree with you that link utilization is easy, but I cannot agree with your other statements, I'm sorry. That's the beauty of neural networks: it's a data-driven approach. So if your real hardware device has bugs, that's fine: they will show up in the data set, and the GNN will catch those bugs, because you are training with real data from the real network.
E
However complex the hardware is, the GNN will be able to model it.
H
H
And you mentioned it's expensive. Yeah, that's expensive. And in your diagram, you are showing that you wanted to have all packets, all packet headers, basically, right?
E
H
E
H
H
E
Let me say something, because I think you raise a very good question. That's why we make such a big effort to build models that can be trained on small networks and operate on very large networks, because what we believe is the commercial solution to the problem you're raising is this: let's say I'm a vendor. I cannot go to a network operator and ask them, "okay, I need a data set, please."
E
"I need a link failure, because I need to see a link failure in my data set for the GNN to understand what happens when there is a link failure, so please tear down that link. Please congest the network, because I need a data set with a congested network, because the GNN needs to see what happens when a router is congested." Of course, this is crazy, right? So what we think makes sense, though I'm hoping for discussion, is: let's build a data set. Sorry, I'm a vendor.
E
I build a small test bed, you know, and I generate a training set there, a super complex training set. In my test bed at the vendor lab I can tear down links, I can congest the network, and then, if the GNN is properly trained and has this nice generalization property which GNNs have, I can use the trained model to operate on networks which are larger than the test bed.
E
So then you have something like the self-driving car model, right? When we buy a self-driving car, they give you a car that has already been trained. Where has it been trained? Well, in, you know, towns which are, I don't know, in Nevada, and in their facilities. You see my point: that's what we envision.
H
That works for a single-domain, single-vendor type of environment. I'm wondering, you know... I like your conclusions. What I would need, and I know it's a difficult question, is the data: getting all the data sets is like the starting point for any data scientist, and it's difficult, yeah.
E
H
What would be good is to know what kind of accuracy you get if you have, like, flows, flows with sampled data, and a little bit of delay, but not everything, and to see how far we could get into the "good and fast" quadrant, but with only some of the data. And this is where I'm trying to match the two, because right now, with the small amount of data we can get from networks today, and that's a fact, except flows, right, we're stuck with "let me try it in the real world."
E
Actually: how much data do I need from the test bed, and how accurate will it be depending on the data? And you are making a very valid point there. There is a new trend in AI, coined by the Stanford professor Andrew Ng, which is called data-centric AI, and what he says is: look, it's not about the algorithms anymore, we have the algorithms; it's about the data. But it's not about "I have billions of data points."
E
It's: what is the minimum set of data you need to train a model? That is called data-centric AI, and actually we have a challenge on this; I will send it to the mailing list. We have a challenge on precisely what you said: data-centric AI for the network digital twin, that is, what is the minimum set of data you need to generate from a network in order to have this fantastic accuracy? And that's a question we don't know the answer to yet.
B
Okay, so we are running a bit over schedule; we have two questions pending. We have a comment in the chat from Zaid, so I will bring it into the audio here: "Training a neural network to recognize a cat is dealing with finite data sets. Network traffic patterns are not finite. How will this solution deal with that?"
B
E
So they are not finite, but they can be characterized by features, and the features have values from, let's say, zero to one. And I agree that there are infinitely many numbers between zero and one, but what neural networks do is: if you show them the feature at values 0, 0.25, 0.5, 0.75, and 1, they will interpolate what happens in the middle.
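The interpolation intuition in the answer above can be shown with the simplest possible stand-in: piecewise-linear interpolation over a handful of "trained" feature values. Real neural networks interpolate in a learned, non-linear, high-dimensional way; this one-feature sketch, with made-up sample values, only illustrates why a finite training set can cover an infinite input range, and why leaving the trained range (extrapolation) is where it breaks.

```python
def interpolate(samples: list, x: float) -> float:
    """Piecewise-linear interpolation over sorted (x, y) sample pairs."""
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x is outside the trained range: extrapolation needed")


# Feature values "seen in training" at 0, 0.25, 0.5, 0.75, 1:
seen = [(0.0, 0.0), (0.25, 1.0), (0.5, 4.0), (0.75, 9.0), (1.0, 16.0)]

# A query between two training points is answered by interpolation:
print(interpolate(seen, 0.375))
```

A query outside [0, 1] raises instead of answering, mirroring the earlier point that inputs unlike anything seen in training require retraining rather than inference.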
B
Yeah, and I think it also relates to the question from Luis: if you have new mechanisms or protocols, you will need to retrain.
E
B
Okay, so I have one question, and I think we have Cheng in the queue, so we need to speed up a bit. I have many questions, but I will just ask one. You mentioned that you were training over a limited set of nodes and then trying to see how it applies to networks that are bigger.
B
My question is: have you tried to assess the sensitivity of the neural network to changes in the properties of the topology? For instance, if you have a change in connectivity degree, or you go from meshes to rings, or other types of topology changes, have you seen sensitivity to that?
E
Very good question also. So this is not our work again; this is standard GNNs. What the literature tells you is that a GNN will be able to provide good accuracy as long as the distribution of the graphs that it has seen in training is similar to the distribution of the graphs that it sees during inference, in operation. Which means that if you want to have good accuracy on ring topologies, you need to include ring topologies in your data set, in your training set.
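The point above — the training set must cover the topology families expected in operation — can be sketched as follows. The generator functions and the two families shown (ring, full mesh) are illustrative assumptions, not part of any specific GNN pipeline:

```python
# Illustrative sketch: build a topology-diverse training set so the graph
# distribution seen in training covers what will be seen in operation.
# Family names and sizes are assumptions for illustration only.
def ring_topology(n):
    """Ring of n nodes: each node i links to node (i + 1) mod n."""
    return [(i, (i + 1) % n) for i in range(n)]

def full_mesh(n):
    """Full mesh of n nodes: one link per unordered node pair."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def build_training_set(sizes):
    samples = []
    for n in sizes:
        samples.append({"family": "ring", "edges": ring_topology(n)})
        samples.append({"family": "mesh", "edges": full_mesh(n)})
    return samples

training_set = build_training_set([4, 8, 16])
families_covered = {s["family"] for s in training_set}
```

If operation will include, say, ring topologies, the check is simply that `"ring" in families_covered` before training.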
I
Oh yes, before I start my presentation, can I ask one more question? (Yes, please.) Okay, this question is also about the data set. Can I know the scale, or how large the data set is, in your GNN training and testing?
E
So it's very large, and that's why we have this challenge that we organize along with the ITU, which is: what is the minimum data set that you can produce to have this fantastic accuracy?
I
Okay. Since your model includes the configuration parameters, whenever a specific parameter is changed you need to retrain the model. Is that correct?
B
Okay, so Cheng, you can start your presentation, please.
I
Okay. The scope of the draft includes presenting an overview of the concept of the digital twin network, providing the basic definitions and a suggested reference architecture, identifying use cases, and discussing the benefits and key challenges of the technology. The objectives of the draft include promoting a widely adopted digital twin network concept, establishing a reference architecture, and identifying future technical research directions on enabling technologies. Next slide, please.
I
Okay. After several updates and discussions in the group, the draft was called for adoption in December 2021, and it was adopted by the research group. In the review cycle we received more than 50 valuable comments, and here we must say thanks to the experts who helped improve the draft: Daniel, Chufang, Laurent, Jérôme, Judy, Luis, Alex.
I
Okay, this table shows a summary of the comments we received and the actions we have taken. We have addressed most comments, either by explaining on the mailing list or by revising the new version. Time is limited, so I will not describe them in detail. Next slide, please. Okay, this slide shows the major changes we made in the new version. They include better structuring of the content, strengthening the research background with more focus on the challenges, closing some old issues, and focusing on some new future research directions.
I
Okay, regarding the remaining issues, this table shows five of them. One is whether a new section on new technologies should be added. The second is a recommendation to describe relevant IETF technologies. The third is to go deep into one or two use cases. The fourth is to study, as I mentioned earlier, the techniques related to the digital twin network. And the fifth is: which level of detail should the document include without losing its purpose, especially for the challenges and enabling technology sections? All these issues have been opened on the mailing list for comments. Next slide, please.
I
Okay, from this slide on, I will take time to discuss some open items regarding the digital twin network. They are motivation, challenges, architecture, enabling technologies and research directions, respectively. Next, please.
I
Firstly, what are the motivations and requirements for a DT network? We summarized four challenges in network operation and maintenance. First, new network services are emerging endlessly and the network scale continues to expand. Second, the complexity of network O&M is becoming higher. Third, new technologies take a longer time to deploy. And fourth, network optimization has a high cost and high risk due to the vulnerable production environment. To address these challenges, we can see the following.
I
Network automation and autonomous operation are becoming a new vision, and recently intent-based networking, the autonomous driving network and zero-touch management have been studied. We can also see AI and machine learning technologies being widely used in the network field to help achieve this vision. And now, with the digital twin, we can see that it brings a new chance to meet these challenges.
I
Then, what are the challenges to build a DT network? First, according to a cited paper here, the main challenges to build and maintain digital twins in the industrial field can be summarized as five aspects. They are: data acquisition and processing, high-fidelity modeling, real-time two-way connection between the virtual and the real entities, a unified development platform, and virtual-real coupling technologies. And the network field has its own characteristics, such as a high level of digitalization, multiple services and complex systems.
I
So, to summarize, we see five challenges to build a DT network: large-scale challenges, interoperability, data modeling difficulties, real-time requirements, and the characteristics of the network itself. We welcome any other input on challenges, if any. Next slide, please.
I
Okay, let's quickly go through the reference architecture we recommended in the draft. In the three-layer architecture, the lowest layer is the physical network, the top layer is the network application, and the intermediate layer is the network digital twin, which is the core part of the system. An optional sublayer can be added for data collection and change control functionalities.
I
Based on the reference architecture explained above, we recommend the following enabling technologies to build the digital twin system. The first is data collection, including diverse existing tools, for example SNMP and network telemetry, as well as innovative new tools, for example sketch-based measurement, and a semantic aggregation mechanism for data integration and action translation. The second technology is data storage and services, enabled by a set of IT technologies. And the third is network modeling, which is the most important one. For small-scale networks, we think that network simulators, for example NS-2 and NS-3, or virtualization-based tools, can be an option.
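The talk only names sketch-based measurement. As one hedged illustration of what that technique typically means, a Count-Min sketch keeps approximate per-flow counters in a fixed amount of memory; the class below is a minimal sketch of the general technique, not code from the draft, and the flow keys are invented:

```python
import hashlib

# Minimal Count-Min sketch, a common example of "sketch-based measurement":
# approximate per-flow counters in fixed memory (illustrative only).
class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        # One independent-ish hash per row, derived from a keyed digest.
        digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(key, row)] += count

    def estimate(self, key):
        # Collisions can only inflate cells, so the minimum never underestimates.
        return min(self.table[row][self._index(key, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
cms.add("10.0.0.1->10.0.0.2", count=250)  # e.g. packets seen for one flow
cms.add("10.0.0.3->10.0.0.4", count=40)
```

The memory cost is `width * depth` counters regardless of how many flows are observed, which is why sketches are attractive for collection at line rate.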
I
These can be an option. For large-scale networks, modeling solutions are normally based on formal methods or mathematical models, including what the previous presenters have just described: GNNs, queueing theory, neural networks. And we think that machine learning and AI can be used to build complex functional models as well. The fourth technology is visualization: it is to display the network topology and operational status, and also to do interactive visualization, to give better understanding and help users explore the network. And the fifth is the interfaces and protocols for users.
I
Oh, this slide shows a case study on data collection. It is an efficient data fusion method for the digital twin network. We know that current collection methods just collect raw and full data from the physical network, and have problems of high time cost, insufficient storage resources, low computational efficiency and waste of bandwidth. This method proposes an efficient and lightweight data collection, aggregation and correlation method. For more details, you can see the draft we uploaded to the NMRG. Next, please.
I
This slide is another case study, on network modeling. It is a knowledge-graph-based construction method for the digital twin network. In this solution, the network system design refers to the architecture in the draft, and the basic models in the solution are built via formal methods: network device models are built based on an ontology, topology models are built using a knowledge graph, and, based on those, functional models can be built using AI and machine learning algorithms. For more details, you can find it in our short paper. Next, please.
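The knowledge-graph idea in this case study can be illustrated with a minimal triple store. The predicate names and devices below are invented for illustration and are not taken from the authors' solution:

```python
# Minimal illustrative sketch (not the authors' system): a network topology
# stored as subject-predicate-object triples, the basic unit of a knowledge graph.
class TopologyGraph:
    def __init__(self):
        self.triples = set()

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        # None acts as a wildcard, as in a SPARQL-style triple pattern.
        return [t for t in self.triples
                if (subj is None or t[0] == subj)
                and (pred is None or t[1] == pred)
                and (obj is None or t[2] == obj)]

g = TopologyGraph()
g.add("routerA", "type", "Router")
g.add("routerA", "linkedTo", "switchB")
g.add("switchB", "type", "Switch")
neighbors = [o for _, _, o in g.query(subj="routerA", pred="linkedTo")]
```

The appeal for a digital twin is that topology, device attributes and higher-level knowledge all fit the same queryable triple form.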
I
Okay, here we want to propose some candidate future research directions; they are not limited to this draft.
I
First, we think that we need to go deeper into the five key technologies listed above. Next, we should get more measurements to quantify the gain brought by the DTN to network management. Next: how can AI, machine learning and RL algorithms be used for network modeling, and how can knowledge be injected into the network digital twin to help pursue the vision of autonomous networks? And how can the digital twin network integrate and evolve with legacy network management systems?
I
And finally, we need to define capability levels and evaluation methods for DT networks: the resource requirements and effectiveness evaluation of a DT system, and the metrics and measurements to evaluate the accuracy and fidelity of a digital twin. Next, please. Going forward, we aim to issue drafts for each of the candidate research directions and to record the outcome of this work as appropriate. We also welcome proposals to enhance the document, and your comments are always welcome. That's all. Thank you.
B
Thank you, Cheng, for being efficient in the presentation. We have maybe the opportunity for questions on the updates and the directions set by the authors of this draft activity.
B
So,
if
not
thanks,
jane-
let's,
let's
use
a
bit
of
time
now
to
continue
on
this
topic
of
network
digital
tweet.
I
think
we
had
two
good
presentation
in
already
in
the
in
the
discussion
with
albert
and
the
question,
we
I
think
we
started
to
touch
on
a
few
good
points.
So
what
we
wanted
to
also
is
again
in
this
notion
of
a
research
activity,
exploring
a
new
new
aspect.
B
The
idea
was
to
really
for
the
research
group
and
in
general
it
can
be
beyond
energy,
but
to
try
to
to
to
think
more
what
what
it
means
network
digital
twin.
What
are
the
research
directions,
research
challenges
that
appears
and
so
jerome
and
myself
we
we
drafted
the
first
set
of
questions.
Of
course
this
is
open
to
anyone
in
the
group
to
add
questions
and,
of
course,
to
provide
your
views
and
opinions
on
the
already
existing
questions
so
jerome.
I
would
like
to
proceed.
Do
we
go
through
the
questions?
C
I think let's open it to the people, if they want to give something back. We can try to go question by question, or go as people want: the questions are there if they want to comment on some of them.
C
I can start, maybe. So I think we have...
C
We already had some discussion before, with Albert's talk, on this point, regarding the accuracy of the model that you want to use for the digital twin and the data that you have as input. But my question is: okay, I want a digital twin to evaluate, for example, performance metrics or whatever metrics, but the interpretation of this metric and, let's say, the usefulness of this metric may really depend on, let's say, the scenario of application.
C
For
example,
if
you
have
preference
metering
that
is
about
the
latency,
maybe
you
you
don't
want
it,
knowing
exactly
what
would
be
the
the
latency
from
purely
metric
point
of
view,
but
more
on
the
application
that
you
may
have
at
the
end,
the
latency
will
be
used,
maybe
to
to
add
resources
or
whatever
and
so
on.
C
And
my
question
is:
is
a
digital
twin
because
we
have
to
retrace
digital
tree
for
each
when
you
have
new,
let's
say
maybe
type
of
data
feature
as
albert
say,
using
gen,
and
I
mean
here.
But
if
you
want
to
use
for
like
this
kind
of
approach,
we
can
sure
we
can
ensure
that
we
have
enough
cases
to
really
have
a
representative
model
for
digital
twin
because,
okay,
you
can
evaluate
accuracy,
but
this
may
be
very
focused
on
the
kind
of
metric
or
accuracy
or
use
keys.
C
But
if
you
want
to
promote
the
gita
twin
to
be
used
for
more
from
to
actually
to
ask
more
questions,
even
for
the
same
kind
of
performance
or
security
or
whatever
same
kind
of
question,
but
will
it
depend
on
where
it
will
be
used?
C
We have seen different models with different, let's say, advantages and drawbacks, and my opinion is that it's not that we need one single model; maybe we need to couple different models as well. There is no single model that will fit all, that's for sure. Even in a simple use case, I think we have to consider that models could be combined.
C
That's
that's
really
my
point
and
yeah.
We.
We
also
assume
that
we
can
have
data.
I
think
some
things
that
have
been
said
we
can
have
data-
and
I
think,
which
is
very
interesting
here
also
in
the
presentation
regarding
the
architecture-
is
that
even
if
you
know
that
we
have
data
that
could
be
used
is
all
we
can
ensure
that
we
collect
the
right
data
at
the
right
time
to
basically
evaluate
the
output
of
any
digital
twin.
I
think
this
is
really
important
as
well.
B
And that's all for my first comment, actually. Okay, thank you. Let's try to have it as, let's say, a group discussion. I see that Albert is also waiting at the mic.
E
Yeah, okay, thanks. This is Albert Cabellos. I feel that we have sort of a chicken-and-egg problem, meaning that somehow we try to be very general, and we try not to be narrowed down by a particular use case or by a particular model. And I understand why, because this is a research group and that's the goal of the research group.
E
All
the
answers
are
at
the
end
it
depends,
and
so
on
now,
when,
when
we
discuss
a
specific
network,
digital
twin
and
and
again,
I'm
not
saying
that
the
one
I
put
in
the
slides
is
the
right
one.
I
don't
I
don't.
E
I
don't
know
enough
about
networks,
I
have
never
run
a
network
and
I
have
never
built
a
network,
so
I
don't
know,
but
what
I
know
is
that
when
we
say
okay,
that's
the
input
and
that's
the
output,
then
all
these
questions
that
I
believe
are
relevant
can
be
answered
in
a
very
specific
way
and
very
concrete
way
right
and
then
suddenly
very
specific
problems
show
up
like
okay.
Yes,
you
can
build
a
performance
network.
Digital
twin,
yes,
can
a
vendor,
do
it?
E
Yes,
but
then
it
will
be
single
vendor
right,
because
the
vendor
will
not
train
using
equipment
from
other
vendors
and
that's
a
real
issue
and
and
I'm
into
that
chicken
neck
problem
right.
So
when
I
try
to
discuss
it
at
a
very
in
a
very
abstract
way,
I'm
lost
when
I
make
it
too
specific,
then
I
understand
that
I'm
being
too
specific,
maybe
a
solution
to
this
is
discussing
okay,
what
are
the
inputs
and
the
outputs?
E
Let's
start
by
this,
if
we
agree
that
that's
a
box,
there
are
inputs
and
outputs,
let's,
let's
try
to
discuss
which
are
the
relevant
ones
and
which
use
cases
enable
different
inputs
and
outputs.
And
what
are
the
challenges
when
you
start
considering
different
inputs
and
outputs
and
I'm
pretty
certain
that
there
are
a
particular
set
of
inputs
and
output,
for
which
maybe
emulation
is
better
than
neural
networks,
or
maybe
a
simulator
or
maybe
a
queueing
theory.
I
don't
know,
but
it's
very
hard
to
discuss
this
without
a
specific
goal
in
mind.
D
I do believe it is important that we understand all the business aspects, including different intents for different customers, security policies in the organization, and any kind of other very high-level policies that are not really configuration things. So I think that is one of the most important things: how to connect the business intents with the digital twin, in order to be able to do self-optimization and self-correction in the right way.
G
There's a comment in the same line as what I was discussing with Albert before. From our experience, we have been trying to run, well, probably you could not call them digital twins, but something that was able to generate realistic data sets for training network control systems. And our experience is that there are two things that are essential. One is about repeatability.
G
You
have
to
have
a
strong
control
of
the
digital
train,
so
you
know
what
you
are
deploying
every
time
and
you
know
where
you
are
putting
the
focus.
What
that
you
are
interested
in
collecting,
because
a
network
is
a
it's
a
one,
it's
not
infinite,
but
this
is
extremely
wide
and
I'm
certainly
a
manageable
field
of
different
parameters
and
configurations,
etc.
G
But
so
you
have
to
have
a
very
clear
view
of
what
you
what
you
want
and
which,
which
is
your
focus.
So
this
is
about
repeatability
and
control.
So
you
can
repeat,
the
important
thing
of
the
digital
training
is
that
you
can
execute
several
conditions
with
the
control
variations,
and
so
you
can
derive
some
insights
or
knowledge
from
whatever.
G
This
is
one
thing.
The
other
thing
is
precisely
the
fact.
Let
me
say
is
that,
given
the
size
of
the
network
and
the
different
technologies
involved,
etc,
believing
that
we
are
going
to
have
something
that
is
a
full
twing
of
the
network
is
jerome
has
said
is
about
the
the
close
coupling
between
the
real
system
and
the
and
the
model.
G
Probably
we
had
to
choose,
depending
on
the
on
the
case
and
the
focus,
we
will
have
to
choose
to
different
levels
of
of
us,
of
abstraction,
of
different
parts
of
the
network,
to
put
focus
on,
let's
say
on
security
aspects,
on
delays
induced
by
radio
conditions
or
in
congestion
in
some
points,
etcetera,
and
this
is
important
as
well.
G
So
I
believe
that
those
are
the
these
two
basic
characteristics
repeat:
repeatability
based
on
control
and
the
the
possibility
of
applying
different
levels
of
attra,
of
abstraction,
depending
on
the
the
goals
of
the
of
the
twin
runs
so
to
say,
are
essential.
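The two requirements just stated, repeatability and controlled variation, can be sketched as a toy experiment loop: fix a seed so a run can be reproduced exactly, then vary one parameter at a time. The run function and its "delay model" are purely illustrative, not a real twin:

```python
import random

# Illustrative sketch of repeatable twin runs: a fixed seed makes each run
# reproducible, and only one parameter is varied per series of runs.
def run_experiment(load, seed):
    """Stand-in for one twin execution: returns a fake 'mean delay'."""
    rng = random.Random(seed)          # seeded: same inputs -> same output
    noise = rng.uniform(-0.5, 0.5)
    return 10.0 + 20.0 * load + noise  # toy delay model, not a real network

# Controlled variation: sweep the load while everything else is fixed.
series = [run_experiment(load, seed=42) for load in (0.2, 0.5, 0.8)]
repeat = [run_experiment(load, seed=42) for load in (0.2, 0.5, 0.8)]
```

Because only `load` varies and the randomness is pinned, any difference between runs is attributable to the varied parameter, which is the property that makes the derived insights trustworthy.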
B
Thank you, Diego; we captured that in the minutes. My proposal is that we have several questions, quite important ones I think, that will require, let's say, iteration from the group participants: collecting your opinions, collecting references and inputs to sustain our reflection on that. So our proposal will be to continue to collect answers to those questions and structure them a bit.
B
Let's
say
a
research,
a
set
of
research
questions
on
this
as
an
activity
for
the
group
before
before
I
mean
understanding
a
bit
better,
what
we
want,
what
we
want
or
what
we
could
do
in
in
the
in
the
research
group
and
even
if
it's
beyond
the
research
group,
how
to
appreciate
so
so.
Thank
you
to
all
the
participants-
and
I
mean
presenters
people
at
comments
for
this
topic
of
network
digital
twin.
This
is
not
the
end
of
the
story,
but
just
the
beginning.
F
So, thank you very much, Laurent. Yes, I will present this idea of CLAS evolution. The next slide, please. So, a bit of background: CLAS stands for Cooperating Layered Architecture for Software-Defined Networking. This was a work that was previously adopted in the former SDN research group and was moved to the independent submission stream.
F
After
the
dismantling
of
the
sdnrg.
It
was
finally
released
three
years
ago
as
an
nfc,
so
it's
publicly
available
and
the
main
proposition
of
this
architecture
was
essentially
to
decouple
the
the
control
of
the
services
from
the
control
of
the
transport
network,
so
allowing
them
to
evolve
independently
and
but
even
despite
of
being
separated
to.
We
propose
a
tighter
integration
of
both
the
strata
in
such
a
way
that
cooperates
a
program
in
a
programmability
manner.
F
Next,
please
next
slide,
okay,
so
as
an
overview,
so
we
defined
the
two
different
strata,
the
services
stratum
and
the
transport
stratum.
The
purpose
is
clear,
so
the
service
essentially
to
program
the
service
and
the
capabilities
of
of
the
service
itself
and
in
the
case
of
the
transport
stratum.
So
all
the
functions
related
to
connectivity
to
the
delivery
of
information
between
different
components
of
the
of
the
service
in
each
of
the
strata
we
define
into
three
different
planes,
control,
plane,
management,
plane
and
resource
plane.
F
So we have seen here in the IETF this week initiatives like the BoF on computing-aware networking, and also work in the ALTO working group in this respect. More and more operators are deploying these compute capabilities, but they are also integrating with compute capabilities that are provided by hyperscalers, by external players, in such a way that networks are somehow transforming into a kind of fabric interconnecting compute environments.
F
Are
started
to
be
complemented
with
ii
ml
techniques
as
clearly
we
know
this
from
the
work
in
nmrg,
so
also
we've
seen
the
the
need
of
intercooperating
these
capabilities
into
the
in
an
operational
manner
to
the
existing
network.
So
the
focus
of
this
evolution
and
bringing
these
ideas
here
will
be
on
management
and
control.
So
we
are
not
dealing
with
aspects
about
service
placement
and
this
kind
of
things.
So
we
were,
we
want
to
focus
on
negan
management
and
management
and
control.
F
So, thinking about how to share the information between the different parties, we introduce a new stratum, referred to as the compute stratum, again incorporating in this stratum the resource, management and control planes. And we define a new plane, applicable to all three strata, which we call the learning plane; essentially, it would be devoted to the handling of data that could help the operation, leveraging the AI/ML techniques that we will comment on later. So next, please.
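The resulting structure, three strata each decomposed into planes plus a learning plane cutting across all of them, can be sketched as plain data. The stratum and plane names follow the talk; the representation itself is just an illustration, not something defined by the draft:

```python
# Illustrative data sketch of the evolved CLAS structure described above:
# three strata, each with control/management/resource planes, and a
# learning plane that applies across all strata.
PER_STRATUM_PLANES = ("control", "management", "resource")
CROSS_STRATUM_PLANES = ("learning",)

architecture = {
    stratum: list(PER_STRATUM_PLANES)
    for stratum in ("service", "transport", "compute")
}

def planes_of(stratum):
    """All planes visible to one stratum, including cross-cutting ones."""
    return architecture[stratum] + list(CROSS_STRATUM_PLANES)
```

Writing it down this way makes the key design choice visible: the learning plane is not owned by any single stratum, so its data handling can serve service, transport and compute alike.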
F
The idea would be for the compute stratum essentially to consider all the distributed computing capabilities, and this, as mentioned, would contain the control, management and resource planes related to the computing part. There are some ideas that could probably be leveraged: there is some previous work in ALTO, as mentioned before, but also some ideas on integrating compute environments. And then, for the learning plane:
F
This
would
be
responsible
of
collecting
processing
and
sharing
relevant
data
for
from
the
for
each
of
the
the
strata
for
the
service
for
the
connectivity
and
for
the
compute,
and
the
idea
would
be
to
leverage
on
iiml
techniques,
and
maybe
a
some
interesting
framework
to
follow
would
be
the
one
in
mnrg
as
well
about
artificial
intelligence
and
also
maybe
considering
other
outcomes
from
itf,
as
would
be
the
the
service
assurance
models
and
so
for
feeding
all
this
stuff
and
helping
on
the
operation
of
of
the
network.
F
So
next,
please
so
exploitation
research
directions
that
we
do
for
c.
There
were
some
of
them
in
the
draft,
actually
the
ones
that
in
the
first
ballot,
so
some
work
could
be.
Maybe
about
a
communication
means
or
interfaces
between
the
strata
and
the
planes.
The
preliminary
scenarios,
including
legacy
ones
so
understand
how
this
evolved
architecture
could
be,
could
fit
potential
use
cases
or
the
link
with
ongoing
activities
in
an
mrg.
Clearly
the
intent-based
activities,
so
that
difficult
intelligence
could
have
some
some
fitting
here
as
well.
We
we
think
that
would
be
the
case.
F
Additional
research
lines,
maybe
could
be
explored
normal
architectural
approaches,
potentially
maybe
as
an
example.
The
boost
architecture,
like
the
service
based
architecture
in
3dbp
or
what
is
what
is
called
cloud-based
architecture,
could
be
probably
thing
a
ways
of
evolving
architecture,
essentially
how
we
in
communicate
between
the
planes,
a
potential
line
as
well
would
be
in
the
domain
apis
between
the
different
or
strata
or
even
in
the
within
a
single
stratum.
So
further
developing
the
ideas
that
we
already
some
of
them.
F
We
expose
in
a
previous
draft
also
in
nmrg,
exploring
the
base
apis
or
approaches
for
the
learning
plane,
specifically
so
how
we
could
activate
or
consume
the
information
that
could
come
from
the
learning
plan
and
maybe
also
even
working
on
data
models
or
even
ontologies,
for
the
change
for
exchanging
and
aggregating
information
that
could
relevant
for
the
operation
of
the
of
the
system
of
the
network
at
the
end.
So
next,
please.
F
I got feedback from Carlos and Diego that was publicly expressed on the mailing list, but also some informal feedback that was given offline. The idea is, with all that we could collect and the indications from the chairs, to prepare new and more detailed versions for the next IETF, and basically understand whether this could fit here or not at all. So thanks so much.
B
I see that there is a relationship with the topics investigated in the COIN research group. I'm not saying that it should be handled only in the COIN research group, but since you raised what the scope of NMRG is with respect to this work, in your, let's say, iteration on this work, please consider trying to liaise with activities in COIN, to understand also what they could bring to this discussion, and what would fit more into the scope of COIN versus what could be the specific scope to be addressed in NMRG.
B
You
see
it's
it's!
It's
really
like.
Imagine
that
this
is
a.
This
is
a
central
topic
for
you.
What
an
energy
could
be
the
good
place
to
do
something
and
what
could
be
maybe
better
handled
in
coin
or
elsewhere,
but
just
as
a
question
of
course,.
B
So we have to conclude the meeting. Thank you, everyone, for your participation. I appreciated the discussion we had today, especially the topic on the network digital twin that was more central to our agenda today; we will try to concentrate on such topics in the future. We have a lot on the agenda, I mean all the work streams of NMRG, so please continue the interactions. I really regret not being in Vienna, but I see people in the room.
B
So
please
continue
the
discussion.
There
continue
the
discussion
on
the
mailing
list
and
to
all
offers.
Thank
you
for
your
investigative
investment
in
the
in
the
work,
and
I
see
that
we
have
a
lot
of
future
version
of
of
the
draft
to
be
flowing
in
the
research
group.
So
thank
you.
Everyone
jerome!
If
you
have
any
last
word
of
conclusions,
the
floor
is
here
just.
C
Want
to
say
thank
you
also
around.
Thank
you
lauren.
So
thank
you,
diego
for
local
backup
and,
yes,
how
to
make
you
a
so
our
close
future
in
real
real
person.
Yeah.
Thank
you
all
and
have
a
nice
day.