From YouTube: IETF114 NMRG 20220727 1400
A
Thanks for being here. We have a short introduction before starting the actual meeting, so bear with me for the following slides: they are not new, so I think most of you are already well aware of them. The IRTF follows the IETF intellectual property rights disclosure rules, which apply to you by participating in the IRTF.
A
The IRTF is not a standards development organization. The Internet Research Task Force focuses on long-term research issues related to the Internet, while its parallel organization, the IETF, focuses on the shorter-term issues of engineering and standards making. While the IRTF can publish informational and experimental documents in the RFC series, its primary goal is to promote the development of research collaboration and teamwork in exploring research issues related to Internet protocols, applications, architecture and technology. You can get more information about the goals and mission of the IRTF in the IRTF primer for IETF participants. Please also note the online meeting etiquette; this session is being recorded.
A
For today we have a very dense agenda. We have a two-hour meeting but a lot of topics to cover, so our first request is for presenters: please stick to the time allocated, not more. We would like to have time for questions and answers, so also bear in mind, when presenting, to keep time in your slot to address the questions you will get; do not use all your slot just for presenting your slides. We will start with this chairs' introduction and research group status.
A
I will not go through the details, because we want to save a bit of time here; just a quick status on some activities of the research group. We have research group documents in the pipe: two documents that are with the RFC Editor, currently in the EDIT stage, so the RFC Editor is looking into those documents and we expect to get some feedback soon.
A
As the next step, we have one active research group document, Digital Twin Network: Concepts and Reference Architecture, and we will have an update on the progress of this draft in this session. There is also another document, Network Measurement Intent, one of the IBN use cases, which has been in a first call for adoption; we asked for a revision addressing some of the comments received before making a second call for adoption, and we will also have an update on this document in this session, plus an outlook on future meetings. But this is a bit business as usual.
A
We will organize some interim meetings for follow-up discussion on the different topics. We have one in the pipe that we have been trying to organize for some time, on the design, deployment and operation of distributed AI. We also have some presentations today about this topic, but we would like to offer a larger platform to address it, in a dedicated interim or collocated with a research conference.
A
We
have
a
network
digital
twin
site
meeting
this
evening
in
the
philadelphia
south
room,
so
just
sign
meeting
and
it's
not
an
official
meeting,
no
agenda.
We
can
also
organize
fallout
discussion
on
ibm
use
cases
so
also
we
are
open
to
receive
your
inputs
for
for
further
meetings.
A
C
I need to break the protocol a little bit. I don't know if you see that there are not so many Brazilians here today. I am not here with any support from the Brazilian government; I am here only with the support of Futurewei, the company that provided the award at the Brazilian congress of the computing society, and, yes, to denounce the neglect that is occurring in Brazil right now in science, research and education.
C
So my presence here is not due to the support of the Brazilian government. Well, let's start talking about the paper, entitled "Using the RFC 7575 and Models at Runtime for Enabling Autonomic Networking in SDN". It was awarded at the workshop pre-IETF last year. It is part of my PhD thesis, which I defended in 2020 at the Federal University of Pernambuco, also in Brazil, and I will just present part of it; if you want to see more details, we can discuss later. Next one.
C
Well, before talking about the paper itself, we need to remember, or introduce you to, the concepts involved in it: autonomic network management. As you may know, in the traditional management of traditional networks we have this autonomic loop put above the data plane, where we need to define the business goals, the knowledge generation, how the policy processing will occur, and also the information processing.
C
The
thing
is,
how
can
we
translate
or
to
insert
all
of
these
components
all
of
this
post-processing,
the
the
business
objectives
and
stuff
in
the
data
plane
this
architecture?
Here
it's
not
new.
It's
the
4k
architecture
based
on
strasner,
it's
from
2006,
and
it
just
defines
how
this
autonomic
loop
can
be
coupled
with
the
data
plane.
C
Well,
when
we
consider
the
sdn
network,
the
sdn
architecture,
obviously
these
business
goals,
the
policies,
the
knowledge
generation,
the
policy
processing.
It
now
needs
to
communicate
through
the
control
plane
and
for
communicating
with
the
control
plane.
We
need
to
translate
these
business
goals
into
network
rules.
Excuse
me:
can
I
remove
my
mask.
C
Okay,
so
besides
translating
this
policy
processing
and
the
business
goes,
we
also
need
to
represent
the
context
from
the
network
from
the
control
plane
to
the
layer
above
to
the
autonomic
loop
above.
C
Okay
for
introducing
the
rfc
7575
for
those
who
are
not
aware
about
it,
it
just
defines
some
guidelines
and
it's
like
a
reference
model.
If
you
want
to
achieve
an
autonomic
management
in
networks,
not
in
sdn,
you
can
follow
these
guidelines
to
achieve
designing
goals
to
to
try
to
implement
your
autonomic
management
in
your
network,
so
it
defines
four
surface
star
properties
like
the
self
configuration
self
healing
self
protection
and
optimizing
besides.
It
also
defines
11
design
goals.
C
So,
okay,
we
have
the
concept
of
autonomic
networking.
We
have
the
rfc
guiding
anyone
who
wants
to
implement
them,
implement
it,
and
the
question
again
is
how
to
enable
autonomous
networking
in
sdn.
We
have
some
guidelines,
we
have
the
the
concept
of
autonomic
networking,
but
how
to
implement
it.
How
we
enjoy
this
two
areas
to
different
areas
together,
we
know
that
atomic
networking
is
not
a
new
area.
C
If
you
have
this
autonomic
software,
you
will
have
the
concept
named
from
the
software
engineer.
You
have
the
model
that
runtime
concept,
the
people
from
all
the
related
areas
provided
a
concept
that
may
help
us
to
to
think
about,
or
to
have
some,
not
so
new
ideas
but
related
ideas
to
how
to
implement
our
autonomic
networking
management,
we're
using
the
control
plane
as
software
okay.
C
Obviously,
the
iet
apps
community
also
made
these
several
contributions
into
the
autonomic
field,
excellent,
so
continually
continually
checking
or
observing
all
the
related
areas.
Like
the
software
engineering
community
community,
we
have
class
diagrams,
we
have
uml,
we
have
sequence
diagrams.
If
you
look
for
the
database
area,
we
have
the
entity
relationship
diagram.
We
have
uml
so
in
machine
learning
as
well.
We
also
have
models,
abstracting,
the
complexity
below
and
providing
some
high-level
models
for
the
users
and
developers
to
evolve.
C
So we now need to introduce the main concept that is at the core of our proposal: models at runtime. It comes from the software engineering community, and it defines that any system, whether it is a network or a control plane running some software, has goals, has behavior and has a structure. And going deeper into each of these parts of the system, you see that it needs to monitor, and it needs to execute some actions.
C
So we have the concept of how to view any system to provide autonomic behavior, but how can we implement it? According to Aßmann, an author who researches models at runtime, we need to define metamodels first. If you look at the left picture, with some yellow boxes: you need to define these metamodels to provide the formalism for your modeling language. You need to define how you can associate each concept, for example objectives with actions and flows.
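The objectives-actions-flows association just described can be sketched in code. This is a minimal illustration only: the class and field names below are assumptions for the sketch, not the actual metamodel from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical metamodel elements; real metamodels would be defined in a
# modeling framework, but plain classes show the same associations.

@dataclass
class Flow:
    src: str
    dst: str

@dataclass
class Action:
    name: str                                   # e.g. "reroute", "rate-limit"
    flows: list = field(default_factory=list)   # flows the action applies to

@dataclass
class Objective:
    metric: str                                 # e.g. "latency"
    target: float                               # e.g. keep latency below 10 ms
    actions: list = field(default_factory=list) # actions that can serve it

# A model instance conforming to this toy metamodel:
f = Flow("h1", "h2")
a = Action("reroute", flows=[f])
obj = Objective("latency", 10.0, actions=[a])
```

The point of the metamodel is exactly this: it fixes which associations are legal (an objective references actions, an action references flows), so any model the operator draws can be checked and then processed automatically.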
C
Besides that, we also need code templates. We need these code templates to make adaptations in the execution of the system. You cannot just give the user some interface and have the user program it directly; if you are trying to model objectives, you need to provide some abstraction. And we also have monitoring techniques and machine learning algorithms combined together to try to achieve the adaptations, to try to execute or achieve the modeled objectives as well.
D
C
In our proposal we used RFC 7575 as a north star, and we provided this three-layer architecture, which is composed of the network model layer at the top, the adaptability layer in the middle, and, obviously, the SDN architecture at the infrastructure layer. In the network model layer we have five models: we have the configuration model of the network, we have the capabilities model, and, above all, we have the objectives model, where the user or network operator can define his objectives at a high abstraction level.
C
Giving some details about each of these layers: the first one, the network model layer. Here we need to use the previously mentioned metamodels. We defined these metamodels; for example, here, in the middle, we have the metamodel for defining the objectives model. So we formalize how you can define the objectives, how you can associate these objectives with actions in your network, how you can associate these actions with flows, and so on.
C
We are using a deep reinforcement learning algorithm, and it is using the information from the models in the objectives layer as inputs. So the parameters, the network parameters, the discount factor that this DRL algorithm uses, are taken from the objectives layer, but it is also monitoring the network in the infrastructure layer.
C
It's really important to see that the DRL algorithm and all the code are generated from the high-level models; you don't program it directly.
C
The
code
of
the
drl
algorithm
are
generated
from
the
above
layer,
so
you
can
look
for
this
layer
and
see
that
you
have
a
knowledge
base
that
keeps
learning
according
to
the
actions
executed
at
the
network,
with
the
decision
that
is
randomly
randomly
decided
and
executed
by
the
algorithm
and
according
to
the
execution
of
each
action,
it
can
verify.
If
is
the
decisions
being
more
close
to
the
modality
objectives
or
if
the
decisions
that
are
being
chosen
are
making
the
network
parameters
worse
compared
to
the
to
the
model,
to
the
objective
to
be
achieved.
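The loop just described can be sketched as a toy reinforcement-learning example: the objective and discount factor come "from the objectives layer", actions are tried (partly at random) on a simulated network, and a small knowledge base learns which action gets closest to the modeled objective. Everything here, including the numbers and the two-path "network", is an illustrative assumption, not the paper's generated code.

```python
import random

random.seed(0)

TARGET_DELAY = 10.0          # modeled objective, from the objectives layer
GAMMA = 0.9                  # discount factor, also taken from the model
ACTIONS = ["path_a", "path_b"]
DELAY = {"path_a": 25.0, "path_b": 8.0}   # simulated network response (ms)

q = {a: 0.0 for a in ACTIONS}             # knowledge base (Q-values)
for step in range(500):
    # epsilon-greedy: sometimes decide randomly, as mentioned in the talk
    if random.random() < 0.2:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    measured = DELAY[action]                  # "monitor the network"
    reward = -abs(measured - TARGET_DELAY)    # closeness to the objective
    q[action] += 0.1 * (reward + GAMMA * max(q.values()) - q[action])

best = max(q, key=q.get)   # the knowledge base has learned path_b is closer
```

After enough iterations the action whose measured delay is closest to the modeled target accumulates the higher value, which is exactly the "verify whether the decisions are getting closer to the objectives" step.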
C
And
if
could
the
arc
architecture
can
have
this
new
obstruction
level?
Using
these
models
to
create
the
the
modeling,
then
translated
these
models
into
network
code,
and
the
last
thing
that
I
bring
here
to
discuss
is
that
if
or
how
intense
will
be
translated
or
translated
into
network
code
or
network
rules,
how
they
will
be
integrated
with
the
monitoring
in
the
adaptation
rules
and
actions.
C
Okay,
concluding,
we
just
saw
this
proposal
from
enabling
autonomic
networking
management
in
sdn.
We
saw
how
the
combination
between
different
different
areas
of
knowledge
can
be
combined
to
provide
some
new
solution.
We
also
see
briefly
high-level
modelling
modeling
architecture
for
implementing
this
autonomic
networking,
and
we
just
saw
the
distribution
that
may
occur
so
far.
That's.
F
C
Well, just to acknowledge the supporters who made it possible for me to be here today: Futurewei, which provided the travel grant; the Federal Institute of Alagoas, which gave me the leave to be here; and also the Laboratory of Data Engineering and Analysis, where I do my research. Thank you very much.
G
I'm sorry. [inaudible]
G
What I was wondering about is precisely the smart models: how complex would it be to derive them from the normal specs we have, or from the normal goals? Because, you know, I have no doubt that once you have this in place it will be useful.
G
No
I'm
talking
about
the
learning
curve.
How
much
for
someone!
C
I think I understand. Well, as we have a layer, a metamodel, that abstracts the complexity, we are not protocol-oriented; we can generate the network rules for any protocol. So, for example, you can build your objectives model, and after you model it, it can generate code for OpenFlow, it can generate code for P4, it can generate code for any protocol. So the learning curve, if I understood you right, is just knowing how to connect the boxes for defining what your objective is, for example, if you want.
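The protocol-independence claim above can be sketched as one objective model feeding two back-end generators. Both output syntaxes below are deliberately simplified illustrations (loosely OpenFlow- and P4-runtime-flavored), not the tool's real generated code.

```python
# One high-level objective model, two hypothetical code generators.

objective = {"flow": {"src": "10.0.0.1", "dst": "10.0.0.2"},
             "action": "forward", "port": 2}

def to_openflow(obj):
    # simplified ovs-ofctl-style match/action rule
    m = obj["flow"]
    return (f"ip,nw_src={m['src']},nw_dst={m['dst']},"
            f"actions=output:{obj['port']}")

def to_p4_entry(obj):
    # simplified simple_switch_CLI-style table entry
    m = obj["flow"]
    return f"table_add ipv4_lpm ipv4_forward {m['dst']}/32 => {obj['port']}"

rule_of = to_openflow(objective)
rule_p4 = to_p4_entry(objective)
```

The operator only touches the `objective` structure; switching the target protocol means switching the generator, which is the "not protocol-oriented" point of the answer.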
C
You don't need to create the metamodel. Actually, the network operator doesn't need to create the metamodel; he just needs to use the model above, the concrete syntax. It's like a modeling language, like UML or a class diagram; but to have this class diagram, do you agree that you have a metamodel describing it, right?
C
Okay, so it's a model that has a metamodel describing it. The metamodel itself, yes, we need to develop for each case, with the objects that you need; but after defining it, the network operator or network developer just needs to use the concrete syntax, like modeling, creating his class diagram, as if he were a software engineer. Yeah.
A
H
Okay, this is Cheng Zhou from China Mobile. Today I will present the draft update of Digital Twin Network: Concepts and Reference Architecture on behalf of our co-authors. Next slide, please.
H
The draft was adopted by the research group in March this year, and the major change in this version is the addition of new sections on enabling technologies to build the digital twin network, actually in reply to a reviewer's comment, including data collection, network modeling, network visualization and interfaces. The following slides will show brief information on the four enablers respectively. Next slide, please.
H
Actually, we have submitted a dedicated new draft regarding data collection; I will give an introduction to it later. For data management, data warehouse technologies, fast search, batch data handling, conflict avoidance and a unified interface for data exchange should be studied first. Next slide, please.
H
The second type is the virtualization technologies, using NFV, containers, etc. Microsoft's CrystalNet and Arista's CloudVision Portal (CVP) are successful examples. We know the first two types of modeling measures are easy to deploy and especially suitable for functional and protocol event validation; however, they have limitations, including high resource consumption, poor performance analysis ability and poor scalability.
H
However, we think these methods also have some limitations on extensibility and lack ability for functional and protocol evaluation; specifically, AI/machine learning models have low interpretability and also need relatively expensive data for training. Since each type of network modeling has both pros and cons, we believe that multiple measures can be used in combination to build a comprehensive digital twin network system.
A
H
Okay, among all methods, data-driven AI modeling is the most promising direction, and this slide shows a digital twin model example from the performance modeling perspective. For more details you can refer to the preprint paper. Next slide, please.
H
Okay, for network visualization: visualization helps users better understand the internal structure of the network and mine valuable information hidden in the network. This slide lists candidate techniques for network topology visibility, modeling visibility and interaction measures respectively. Next slide, please.
H
This slide shows the three types of interfaces to build a digital twin system based on the proposed architecture. All interfaces should be open and standardized. Specifically, northbound interfaces should be extensible, and candidate options can be RESTful APIs; internal interfaces should be fast and efficient, and candidate options can be XMPP and HTTP/3; southbound interfaces should be lightweight, and candidate options can be gNMI and NETCONF. Next slide, please. Okay.
H
Okay, here I will present the draft updates on Data Collection Requirements and Technologies for Digital Twin Network. Next slide, please.
H
The scope of the draft includes describing the requirements on data collection for building digital twin networks and providing data collection methods towards building the digital twin data repository. The objective of this draft includes identifying the data collection requirements and principles for DTN, calling for more efficient data collection methods suitable for the digital twin system, and reaching consensus on selecting data collection methods for various network data. Next slide, please.
H
Initially, the draft covered just a specific collection method for the digital twin network. Now we promote the draft to extend the scope to general data collection requirements and methods for the digital twin network. So in this version we restructured the draft, added sections on data collection requirements for DTN, refined the text and made some editorial changes.
H
Okay, this slide shows the content of the current draft. Next slide, please.
H
Okay, the draft lists six data collection requirements. In brief: first, the data collection should be target-driven and on-demand. Second is to use diverse tools for various data, and here we list some potential directions for data collection to study further. The third requirement is lightweight and efficient collection, and some detailed comments are listed here. Next slide, please.
H
The fourth requirement is open and standard interfaces for data collection, the fifth part is naming for data collection, and the last one is efficient multi-destination delivery. Next slide, please.
H
In this section, we provide an efficient data collection solution for the digital twin network. Current collection methods mainly collect raw and full data from the physical network, and have problems of time cost, insufficient storage resources, low computational efficiency and the risk to bandwidth resources caused by data transmission. So we propose an efficient and lightweight data collection, aggregation and correlation method.
H
First, the twin network sends an instruction to the physical network to collect data on demand. Then the physical network completes the instruction, such as acknowledgment and equipment representation, and the network telemetry stream element (TSE) of the physical network completes the data aggregation and correlation. Finally, the TSE sends the representation data to the twin network. That is the whole procedure. Next slide, please.
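The instruct / aggregate / return-representation procedure above can be sketched as a toy pipeline: the twin issues an on-demand instruction, the physical side aggregates raw telemetry per node and interface, and only the compact representation travels back. All field names and numbers are illustrative assumptions, not the draft's data model.

```python
# Raw telemetry as it would sit on the physical side (never shipped whole).
raw_telemetry = [
    {"node": "n1", "if": "eth0", "delay_ms": 3.1},
    {"node": "n1", "if": "eth0", "delay_ms": 2.9},
    {"node": "n1", "if": "eth1", "delay_ms": 7.0},
]

def collect(instruction, samples):
    # Aggregation/correlation at the telemetry-stream element: average per
    # (node, interface) instead of shipping every raw sample to the twin.
    wanted = [s for s in samples if s["node"] in instruction["nodes"]]
    grouped = {}
    for s in wanted:
        grouped.setdefault((s["node"], s["if"]), []).append(s["delay_ms"])
    return {k: sum(v) / len(v) for k, v in grouped.items()}

instruction = {"metric": "delay", "nodes": {"n1"}}     # sent by the twin
representation = collect(instruction, raw_telemetry)   # sent back to the twin
```

Three raw samples become two aggregated values here; at network scale this is where the bandwidth, storage and computation savings claimed for the method come from.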
H
Okay, going next: we will further investigate and categorize existing data collection methods to select the right data for building the DTN system, and we will verify the data collection method on a DTN demo system. We are calling for more efficient data collection methods suitable for the digital twin network system to enrich the draft. Looking forward to comments; I am ready to answer questions. Thank you.
A
I
Next one. Sorry. So, well, very briefly, because we have already normalized the concept: a digital twin is a virtual replica of a physical system that recreates with high fidelity the behavior of the physical system. The example most people use is: I want to build a plane, but I don't want to build the plane to test it, so I use a model to understand how the plane behaves.
I
As input, when I say a complete description, I mean the topology, the routing configuration, if you want some specific scheduling policies, a specific traffic matrix, specific flows or different traffic models. As output, right now we are targeting three metrics: the delay, jitter and losses.
I
And in the draft we start talking about the requirements of this digital twin. Next, please. The first requirement is that it should be fast. Next, please. Because, well, we are also targeting optimization scenarios: by network optimization we are thinking of these algorithms that explore a lot of different configurations until they find an optimal one, or a configuration that satisfies a certain objective, and if you want to run these algorithms in more or less real time, or on short time scales, you need a model that is really fast.
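The "it must be fast" requirement follows from how such optimizers use the twin: every candidate configuration costs one twin query, so per-query latency multiplies directly into optimization time. A minimal sketch, where `twin_predict_delay` is a hypothetical stand-in for the digital twin's performance model:

```python
import itertools

def twin_predict_delay(routing):
    # Hypothetical stand-in for one digital-twin query; the per-hop costs
    # are made-up numbers for illustration.
    cost = {"r1": 12.0, "r2": 7.5, "r3": 9.0}
    return sum(cost[hop] for hop in routing)

# Exhaustively explore two-hop routing candidates, querying the twin once
# per candidate, and keep the one with the lowest predicted delay.
candidates = itertools.permutations(["r1", "r2", "r3"], 2)
best = min(candidates, key=twin_predict_delay)
```

Even this toy search issues six twin queries; realistic optimizers issue thousands per decision, which is why a slow model (e.g. packet-level simulation) cannot sit inside the loop.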
I
It also has to be accurate across different configurations, routing protocols, scheduling policies, and different ranges of traffic intensity. And finally, it has to be accessible. Next, please. By accessible I mean, on one hand, being able to communicate with existing systems, so it needs a way to plug into an SDN controller or the different control and management systems that are present in current networks; and it also has to produce metrics that are commonly used in network engineering, for example the delay.
I
And not, you know, some model that produces outputs that are not easy to understand for network engineers. Next, please. In the draft we outline the architecture and interfaces of this digital twin. Next, please. We are following the reference architecture of the digital twin concepts draft, and, as you can see here, we have the physical network, which is connected to the management and control plane, and the digital twin is also connected to the management and control plane.
I
We have also outlined the interfaces here. Next, please. For the configuration interface and the measurement interface, we have seen previous drafts that talk about how to collect data; we can also leverage widely used IETF protocols like NETCONF or NetFlow for measurement. Next, please. And this administrator, or intent-based, interface: for our use case, minimally it should just support running the digital twin, defining some optimization objectives and applying some configuration, but of course it can be more.
I
I mean, it can range from a simple CLI to, you know, a state-of-the-art graphical user interface, or even the more modern intent-based networking, for which there are some drafts in the group as well. Next, please. But the one we are most interested in discussing here with the research group is the interface of the digital twin, because it is basically the one that is not that clear. From our perspective, this should be like a request-response interface: you send as inputs the ones I've mentioned a few times before, the topology, the configuration, the traffic demands of your network, and then you get as output three matrices with the performance metrics, like the delay, jitter and loss. Next, please. But we think this requires more discussion. For example, maybe you want multi-vendor compatibility: you want to use a digital twin from a specific vendor for some tasks, or maybe you then need to swap it for another digital twin from another vendor.
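The request-response shape being proposed can be sketched concretely: topology, configuration and traffic demands in; delay, jitter and loss matrices out. The schema and the stand-in model below are illustrative assumptions for discussion, not a proposed standard.

```python
# Hypothetical request: the three inputs named in the talk.
request = {
    "topology": {"nodes": ["a", "b"], "links": [("a", "b", {"bw": 10})]},
    "configuration": {"routing": {("a", "b"): ["a", "b"]}},
    "traffic": {("a", "b"): 4.0},      # demand, same units as link bw
}

def query_twin(req):
    # Stand-in performance model: delay grows with link utilization
    # (an M/M/1-flavored toy, not a real twin's internals).
    (u, v, attrs), = req["topology"]["links"]
    util = req["traffic"][(u, v)] / attrs["bw"]
    delay = 1.0 / (1.0 - util)
    return {"delay":  {(u, v): delay},
            "jitter": {(u, v): 0.1 * delay},
            "loss":   {(u, v): 0.0}}

response = query_twin(request)
```

Pinning down even a toy schema like this makes the open questions in the talk concrete: a publish-subscribe variant would push `response` continuously instead of on request, and multi-vendor swapping only works if both vendors agree on the request and response structure.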
I
Also, we don't really understand which control plane element we should connect to: should we connect to an intent-based interface, should we connect to an SDN controller? And also, I said that the type of interface should be request-response, but maybe you want a publish-subscribe model, in which you are continuously evaluating the performance of the network because you want to adjust some performance parameters in real time. Should the digital twin include the validation of the SLA, so just, you know, like a boolean?
E
I
Well, we already presented this table in the previous meeting, so I'm going to go very quickly over it, also because we more or less saw the same in the previous presentation. So the first thing you consider when you want to build one: you can do it with simulation, because network simulators are very accurate and they support virtually any feature, and if something is not supported you can implement it. But the key point is that they take a lot of time to run.
I
To get to this nice performance. Next, please. But why am I talking about implementation in a research group? Well, if we are about to work with machine learning: it's a continuously developing technology, it's really complex, and there are some limitations that are different from what we're used to in networking. So, I don't know, maybe you have a switch and you know the rate the backplane runs at, and you can do your calculations more or less; but, for example, with machine learning.
I
Maybe you have some legal limitations on obtaining this data, or it's from a customer and you cannot ask your customer for data. And another interesting point is that machine learning models usually cannot predict what they have not seen.
I
So if I have a model and I haven't trained it with samples of a network that is congested, it will never ever tell me that my network will get congested, and this is not really useful. In order to understand all these limitations, we implemented a performance digital twin prototype called RouteNet-Erlang, based on GNNs, and we have open-sourced it; you can get it on GitHub, so that people can play with it and understand the limitations of this technology.
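The "cannot predict what it has not seen" point can be demonstrated with a toy experiment: fit a model only on uncongested samples of a queueing-like delay curve, then ask it about a congested load. The curve and numbers are illustrative, not from the prototype.

```python
import numpy as np

def true_delay(load):
    # M/M/1-flavored toy: delay blows up as load approaches 1.
    return 1.0 / (1.0 - load)

# Train only on uncongested operating points (load <= 0.5), as a model
# trained on production data from a healthy network effectively would be.
train_load = np.linspace(0.05, 0.5, 20)
coeffs = np.polyfit(train_load, true_delay(train_load), deg=1)

# Ask about a congested load the model has never seen.
congested = 0.95
predicted = np.polyval(coeffs, congested)   # stays near the trained range
actual = true_delay(congested)              # explodes to ~20x the baseline
```

The linear fit, never having seen congestion, hugely underestimates it; the same failure mode hits far more capable models whenever the query lies outside the training distribution, which is exactly why congested-regime predictions need congested-regime training data.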
G
A
I mean, I'm sure, since the network digital twin topic will also be covered later in the next presentation and this evening, there will be a lot of questions for follow-ups. But thanks.
H
L
We propose the following methods to realize high-fidelity simulation of the physical flow by the twin flow, and it also needs to satisfy the following three characteristics at the same time, which can be summarized as the three consistencies. The first, forwarding-path consistency, means that the twin nodes that the twin flow passes through in the twin network layer are consistent with the physical nodes that the physical flow passes through in the physical network layer. And the second one is:
L
As shown in the animation on the right; please click play on the animation.
L
The second is that the data transmission network between the physical network and the digital twin network uses deterministic networking. The third is that only the key information of the physical flow is collected, so payload information doesn't need to be collected.
L
So
next
slide
piece,
so
this
page
gives
the
advantage
of
the
proposed
simulated
mess
measures.
The
first
one
is
that
the
forwarding
pace,
forwarding
time
and
keyflow
information
are
consistent
between
twin
flow
and
physical
flow
and
it
can
meet
the
needs
of
various
sceneries
and,
what's
more,
is
easy
to
implement
next
slide
piece
and
then
is
the
second
chapter.
One-Way,
delay,
environment
method
based
on
digital
team
network,
and
this
one
is
based
on
the
last
one.
L
Existing approaches require special test packets and time synchronization, cannot test all network protocols, and need to change the format of service packets, so there will be some problems in actual deployment. We propose a method to solve this, and it can realize: no need to send measurement packets, no need to change the physical network configuration, no need to change the format of service packets, and no requirement for physical network elements to support the time synchronization protocol. Next slide, please.
L
When a flow of the physical network is input at physical network element one (you can see the figure on the right), it passes through physical network elements two and three and is finally output from physical network element four. When physical network element one receives the data packet, it normally forwards the data to physical network element two, and at the same time transmits the data to twin network element 1.
L
At
this
time,
the
local
time
of
the
twin
network
element,
1,
is,
can
be
seen
as
a
little
t1
and
the
deterministic
network
transmission
delay
is
a
big
t1.
Since
the
arrival
time
of
the
traffic
information
recorded
by
the
twin
network
element
is
the
litho
t1
minus
the
big
t
one.
Similarly,
the
arrival
time
of
the
data
package
recorded
by
other
thing
network
elements
is
little
t
and
minus
little
t
and
the
big
tn
and
next
slide
piece.
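The timestamp correction just described can be shown as a worked example: each twin element records its local arrival time t_n minus the deterministic transmission delay T_n from its physical element, recovering when the packet really hit the physical element; the physical one-way delay is then the difference of two corrected times. All numbers below are illustrative.

```python
# (local time t_n, deterministic phys->twin transmission delay T_n), in ms
t1, T1 = 100.40, 0.30   # recorded at twin network element 1
t2, T2 = 100.95, 0.25   # recorded at twin network element 2

arrival_phy1 = t1 - T1  # when the packet arrived at physical element 1
arrival_phy2 = t2 - T2  # when the packet arrived at physical element 2

# One-way delay between physical elements 1 and 2, measured entirely in
# the twin layer, with no test packets injected into the physical network.
one_way_delay = arrival_phy2 - arrival_phy1
```

This also shows why deterministic networking is required between the two layers: the subtraction only recovers the true arrival time if each T_n is a known constant rather than a variable queueing delay.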
D
Basically, within the digital twin network you would actually have twins of every flow, of every packet, so maybe you can explain that a little bit. The second thing is: why would you not just perform the measurement in the production network, as opposed to in the digital twin? Whoever is concerned with the measurements in the digital twin really actually wants to have the actual thing there. So maybe you can explain a little better why, or what is motivating your work. Thanks.
L
Okay, thanks for your question. We think that in this way we can get an accurate flow simulation, and the delay can be measured, no?
K
Yeah, hi, this is Albert from UPC. I have a similar question to Alexander's. I don't really understand: if you can measure the real thing, why do you need a digital twin network? This brings me to two actual questions.
The first one is: it seems your draft implies that you need the same amount of resources for the digital twin as for the physical network.
L
Yes, the main reason is that the physical networks are not affected, and we just take the measurements in the twin network layer.
K
Okay, understood. And my second one is rather a comment, and I would like to hear your understanding. I have a feeling that we have a requirement on digital twins that we never put explicitly, but somehow we understand a digital twin as something that will tell me what will happen if I change something on the network, or if I have a certain traffic. It's more about the future and different scenarios, rather than what is happening in the present, because in the present I already have my physical network.
A
Okay, thanks a lot, Danyang, for the presentation. I think there are some interesting questions to address to clarify some aspects of your proposals, and I invite the discussion to continue offline, on the mailing list or directly between the participants.
A
Thanks again. So, in the agenda, we were planning to have a quick wrap-up on the network digital twin activity progressing in the NMRG. The goal here: in fact, we have received more proposals and more inputs on this topic in the NMRG in recent times, in the past few months.
A
So
we
see
that
this
is
a
growing
topic
of
interest,
not
only
in
energy
but
but
also
in
other
communities,
other
groups,
and
this
is
why,
in
fact,
we
invite
people
to
attend
if
they
can
the
side
meeting
that
we
have
this
evening,
but
there
will
be
also
follow-up
discussion
on
the
mailing
list
or
by
other
means.
The
goal
is
in
fact
to
just
to
understand
a
bit
the
future
direction
of
these
different
proposals.
Energy
could
be
a
forum
for
some
of
the
discussion.
A
It's
a
research
group,
so
we
want
to
focus
on
these
research
activities.
We
see
that
there
are
other
proposals
that
are
more
towards
solidization
or
engineering
or
even
implementation,
etc.
So
these
are
very
interesting
proposals.
What
we
would
like
to
clarify
is
impact
in
the
scale
of
irtf
and
ietf.
What
could
be
the
the
landing
spots
for
this
activity
depending
on
the
type
of
activity,
and
also
maybe
to
share
information
and
coordinate
with
what's
happening
a
bit
in
other
groups
and
in
the
research
community?
A
So these will be some of the points we would like to discuss later on in the side meeting, and again, to make it available to anyone, also on the mailing list. This was just to wrap up this discussion. Thank you, everyone. We are now switching to another topic of the agenda, which is green networking. We have two presentations; also, new drafts have been proposed in different venues, I mean for the IETF and the IRTF, and the first presentation will be from Alex.
D
Okay, thank you. So yes, this presentation is basically on a potential new topic to look at here in the scope of NMRG, and this concerns the topic of management for green, or also, well, sustainable networking in general. There are two associated drafts with this: one is on challenges and opportunities in green networking.
D
I think this one is specifically suitable for discussion in NMRG, and there's also a companion draft on green networking metrics. You see, basically, I'm presenting on behalf of a number of co-authors. Next, please. So the question: why green networking, and why management for it? Well, I think everybody is aware that reducing the carbon footprint is one of mankind's grand challenges, and of course networking applications have been a key enabler in this, by reducing travel, by enabling remote work, etc.
D
The same is being applied to network providers as well, so clearly something to think about. There are various contributors to network energy efficiency today: basically, efficient usage and efficiency and so forth. There are, of course, general hardware advances (benefits from Moore's law), deployment factors, such as deploying at colder temperatures, and advances at the lower layers, antenna technology and so forth. Many of those factors are important, and they are big factors. However, they are kind of outside what we in the IETF and so forth
D
could control. But what about network- and management-specific factors, where basically the IETF and the IRTF can contribute?
D
That is the motivation for this. Next, please. So, some observations on why we believe this is specifically also a management topic. For one, management in many cases involves questions of optimization already, right? You see many research papers on how to place VMs, how to place VNFs,
D
how to plan routes and segments and paths, optimizing in various ways, always moderating the different types of trade-offs involved. And in this, of course, energy usage and energy efficiency is yet another parameter that can be optimized, one that perhaps has not been so much the focus in the past, but really perhaps it should be. Also, in many cases, management involves control loops.
D
The circle closes, and those control loops clearly also apply to optimizing energy efficiency; of course, short time scales are required, but that is one thing this basically sets up.
D
And finally, another aspect that we believe can be leveraged (and this actually also makes it more of a challenge) concerns the fact that the incremental energy use in communications is not linear. Basically, when you think about it, to transmit the first bit you need to power up equipment, and that requires a lot of energy usage right there. So the cost of the first bit is very high versus subsequent ones.
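The "expensive first bit" behavior described above can be sketched with a simple load-proportional power model. This is a hedged illustration only: the power figures, line rate, and function names are assumptions for the example, not numbers from the drafts.

```python
# Illustrative linear power model: a device draws a large idle floor
# plus a smaller load-dependent term, so energy per bit is dominated
# by the idle floor at low utilization.

P_IDLE_W = 150.0       # assumed power at zero traffic
P_MAX_W = 200.0        # assumed power at full load
CAPACITY_BPS = 100e9   # assumed 100 Gb/s line rate

def power_watts(utilization: float) -> float:
    """P(u) = P_idle + (P_max - P_idle) * u, with u in [0, 1]."""
    return P_IDLE_W + (P_MAX_W - P_IDLE_W) * utilization

def energy_per_bit_joules(utilization: float) -> float:
    """Energy spent per transmitted bit at utilization u (0 < u <= 1)."""
    return power_watts(utilization) / (CAPACITY_BPS * utilization)

# Energy per bit drops sharply as utilization rises, which is why
# consolidating traffic and idling unused devices can save energy.
low_util_cost = energy_per_bit_joules(0.01)
high_util_cost = energy_per_bit_joules(0.9)
```

Under this (assumed) model, a nearly idle device still pays almost the full idle power, which is the point the speaker makes about the first bit.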
D
Then we see that, if you utilize it more highly, it really doesn't make much of a difference, and this suggests questions, of course, of how to leverage and exploit that. There are large potential gains, for instance, by being able to idle resources, taking them offline and so forth when they're not needed, and doing this on rapid time scales. So this is certainly one potential thing to explore and to manage towards. Next, please.
D
So the first draft concerns the problem statement, or the challenges and opportunities. We decided to structure these challenges and opportunities into four different areas: starting with what you can do at the equipment or individual device level, then looking at what you could do at the protocol level, at the level of the network as a whole, and then, finally, at the overall network architecture.
D
Next, next. So, at the device and equipment level: this is basically where perhaps many of the most obvious things immediately lie, and many of those aspects again concern power-efficient hardware, eco-friendly materials and so forth: important, but outside IETF scope. Getting closer are, for instance, things such as energy-saving policies. This is very common on endpoints (you have power-saving modes, etc.), but what about equipment inside the network?
D
We don't really see much of that there, and the question is what those types of things would look like. Then, of specific interest, and a prerequisite for a lot of things, is how you provide visibility into the current energy usage. How can you even assess that, and validate, for whatever aspects you are applying, how well they actually work? Basically, this requires instrumentation inside the network.
D
This is something that is relatively immediately actionable, I guess, and it also requires energy metrics, the right types of energy metrics to base these things on. And I see there's a question: do we take questions now? All right.
D
Okay, so then the next aspect concerns the protocol level, and this is basically the question of what can be done there. There are actually several aspects to this. One thing one can think about is what would be needed in terms of protocol support to enable certain mechanisms or methods to save energy. For instance, one of the things I mentioned earlier was: what if you could take resources offline when they're not really utilized? Obviously, there are controversies.
D
What that would lead to makes for a longer discussion, but assuming basically that you can do that: well, today, one of the issues is that it's often not practical, due to the time scales involved.
How could you, for instance, apply traffic adaptation so that you basically maximize efficiency? It is often good to have smooth transmission, to avoid congestion and collisions; on the other hand, sometimes, and I think we see this particularly in low-power environments, it is actually beneficial to transmit in bursts, so basically having some periods of silence and then transmitting a burst. All of these things are worth thinking about. Next one, please.
D
And well then, at the network level, there are things to think about as well. If energy is a cost factor, or power efficiency is, are there certain energy-related control protocol extensions that would be needed in order to make SDN controllers, or distributed control planes,
D
aware of that, and leverage that? Similarly, there are topics of energy-aware routing and energy-aware path configuration, which allow assessing, for instance, the carbon intensity along one path or another, and then basically optimizing the network to minimize the overall footprint: for instance, if there are different alternatives, steering traffic along the one which is greener by some definition. Those are basically topics that are worth spending research on.
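The energy-aware path selection just described can be sketched as an ordinary shortest-path computation where link weights are a carbon-intensity score rather than hop count or delay. This is a hedged sketch: the topology, the scores, and the function name are made up for illustration and do not come from the drafts.

```python
# Dijkstra over per-link carbon-intensity weights: "greenest" path wins.
import heapq

def greenest_path(graph, src, dst):
    """Return (total_carbon_score, path) from src to dst."""
    heap = [(0.0, src, [src])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            return cost, path
        for nbr, carbon in graph.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(heap, (cost + carbon, nbr, path + [nbr]))
    return float("inf"), []

# Illustrative topology: edge weights are relative carbon-intensity scores.
topo = {
    "A": {"B": 1.0, "C": 5.0},
    "B": {"D": 1.0},
    "C": {"D": 0.5},
    "D": {},
}
total, path = greenest_path(topo, "A", "D")
```

The interesting research questions the speaker raises sit around this sketch, not inside it: where the carbon scores come from, how often they change, and how to trade them off against latency and load.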
D
Similarly, I was mentioning the issue of what if we could force resources idle and take them offline and bring them online again; this is basically a topic of resource idling schemes at the network level. And then there are other aspects: for instance, just like there's placement of virtual machines.
Time for a question? Thank you, all right. Okay, I have one minute? Yeah, one minute will be fine. All right, okay, good, so next, please; I'm almost done, I think. So, one of the concrete first steps, basically, is the companion draft to the challenges-and-opportunities one, concerning network energy metrics, because again it starts with providing visibility, yeah.
D
So basically there's this draft, which attempts to define a set of metrics associated with different aspects: related to the equipment, but also related to flows, related to paths, and related to the network at large. Again, this is maybe not so much a research topic; it's more actionable, so perhaps this one doesn't belong in NMRG, but I just wanted to put it here. Next one, and I think that's the last one, so yeah.
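The equipment/flow/path/network structure of the metrics draft can be illustrated by rolling per-device readings up into higher-level metrics. A hedged sketch only: the device names, readings, and helper names are hypothetical, not definitions from the draft.

```python
# Per-equipment power readings (hypothetical instrumentation output).
device_power_w = {"r1": 180.0, "r2": 220.0, "r3": 150.0}

def path_power(path):
    """Path-level metric: sum of power attributed to devices the path traverses."""
    return sum(device_power_w[d] for d in path)

def network_power():
    """Network-level metric: total over all instrumented devices."""
    return sum(device_power_w.values())
```

A real metric set would also have to decide how to apportion a shared device's power across the flows that traverse it, which the sketch deliberately leaves out.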
D
So basically, the intent here was to raise awareness and gain critical mass on this as a topic, to look for collaborators, and also to encourage more research into that area. We are also looking for the proper landing spot. It seems that, at least for the research opportunities and challenges, NMRG may be the right place, and for this, yeah, we would ask for feedback. So, thanks.
M
Yeah, hi, Dan Bogdanovic. There are two problems with energy and telecommunication.
M
One is the optics; that's the biggest consumer in networking today, and you cannot turn lasers on and off on demand, because they take a few seconds to warm up and start propagating the information throughout the network.
M
This is where the majority of your energy consumption is. If you look at any of those nice boxes that the vendors are doing, if it's an optical box, suddenly the box weighs tons and uses kilowatts of power, but all that power goes into the optics. So that's an optical problem and a materials science problem that I don't think we are the right place to solve, but still.
M
You know, it doesn't mean that your shortest path is the best path from the energy perspective, because you have to know how your energy is being routed throughout the system and where it is coming from. So you have to create a much more complex semantic that we don't have the expertise for, as well. We have to talk to the energy distributors about how they are doing that and how they have to route the energy, you know, in order to provide the demand for that.
So yeah, very briefly, regarding those comments: first of all, clearly there are big factors, and I tried to indicate that maybe only a small slice of this is within our control. Clearly, yeah, power consumption from lasers is a big thing, but I believe this should not prevent us from looking at the things that we can control; clearly, we need to know which ones we can control and which ones not. Regarding the second point:
No, I'm not suggesting that we try to make decisions without having the inputs from the materials science people, as well as from the energy distributors, because you have to take both of those into account to be able to create the right semantics that will enable you to make the proper routing decisions. Yeah.
D
But for the second thing, I know we have limited time, but very clearly there are many things that you can take into account; basically, making this awareness an additional factor to trade off is a worthwhile thing to consider, I believe. Clearly, there are many complexities, and not everything may be actionable such that we can define a standard right away. This is precisely why I believe this is a good topic for the IRTF, because things of this nature make a lot of sense there.
K
So, thanks for your presentation; I think that the topic is needed, interesting, and relevant.
K
While
listening
to
your
presentation,
I
I
was
wondering
something,
and
maybe
it's
a
suggestion
for
you
to
move
forward
right
is
how
much
so
if
I
want
to
transmit
one
single
packet,
I
need
to
first
build
a
set
of
networks,
network
equipment
right.
I
need
to
actually
build
the
thing
and
then
I
will
spend
energy
in
lasers
and
so
on,
and
I
was
wondering
what
is
the
amount
of
energy
it
takes
to
build
the
actual
network
equipment,
and
maybe
that's
something
that
it's
it's.
K
I know it's out of scope for the IETF, but I think that it is important to at least understand the scale. Is the operation of networks ten percent of the whole energy consumption of building and operating a network, or is operation ninety percent of building and operating the network? That's just a suggestion, okay.
N
Benoit here. So, that's an interesting topic, right, you observe energy, etc., but I want to remind people that we had a working group called EMAN, Energy Management, in the past. It was about having power states in the network, exactly for what you were mentioning: it's Sunday, so I want to get to a lower power state, or, you know, people are on strike, or whatever.
If it takes 40 seconds until a line card comes up, then at that point in time I had no operator who actually wanted to do this, because you are selling expensive stuff that we're going to say we'll put offline for some time, and it takes a while to come back. Whenever we speak about a couple of milliseconds of rerouting, it's not the same scale. So we were stuck at the use-case issue.
O
I just want to say that this is a very interesting topic and we should work on it. I mean, it's obviously important for the whole society, and we should do our part. I just wanted to draw your attention to another activity that is coming up.
O
The IAB is planning to launch a workshop on, sort of, environmental impacts of the internet, and energy consumption and so on; it has, you know, a broader scope, perhaps, than what you were talking about, but it's another venue where some discussions could happen. It would be a one-time workshop, of course, so it should not stop sensible research group and working group efforts elsewhere.
O
Look at metrics, look at needs for new work where it would be most beneficial, and look for solutions: that's the scope of the workshop announcement.
P
Sorry, Thomas Eckert, yeah. So I think with EMAN the best use case I saw was simply PoE management: using the network as the infrastructure to really manage, physically, a lot of interesting devices, cameras, lighting in buildings and so on. So there was, and I think still is, a whole evolving area of
P
you know, new in-building network architectures, like, you know, cheap switches that do lighting power, everything under the ceiling. So I think that still is something to be made much more use of; it's slowly evolving, right, since it's kind of these industries that operate on decade cycles.
P
That was the trick, right. So, all right, okay: instead of looking forward, there's always a good thing in looking backward and trying to figure out what we have done so far. Next slide. So this is the third draft, and the name is down there, so, right.
P
Look for gaps, close them, look for new areas, standards and research, and, yeah, try to also, you know, market the IETF and its work in this area, whether it's for sustainability or the example I was just giving; so, all that stuff. For the draft itself, it seems to be simplest as an individual submission; if there's more interest,
P
maybe we get some other form of sponsorship, but I think, when you see the broad range of topics being covered, it seems like trying to reach IETF consensus on that content would make it rather less flexible for, you know, individual chapters with individual authors and good opinions to be brought together. And we'll get to that call for participation later. So, obviously, the IETF has never done anything for energy, right? Oh, wait a second!
P
There are these IoT low-power networks. Okay, so we've done that. And, oh, wasn't there one more thing, and one more thing... When you really start looking through all the IETF repositories, from one area to the other, it reminded me of Monty Python's Life of Brian. You have to find that video clip which says "What have the Romans ever done for us?", and it's kind of, yeah, pretty much everything, right? Which obviously for energy isn't true, but it's really...
P
When
you
see
the
next
slides,
it's
a
lot
like,
let's
next
slide,
okay,
so
the
scope
of
the
document
is
not
only
the
things
that
people
said
were
intentionally
for
energy
and
and
so
on.
P
but what I felt actually relates to energy: a lot of stuff that is incidental. And that's really, when you start thinking about what the IETF and the internet have done to impact energy consumption on the planet, pretty much everything that the IETF has done from the beginning, because it has led to what we now call digitization of non-digital, pre-network workflows, which is really very much based on, you know, the foundations of the packet networking that the internet was the first and biggest one to explore.
P
Right, so, from all the applications, like mail replacing postal mail, group communication, then, ultimately, thousands of applications that were done without digital means before, through HTTP/HTML frameworks. And when you look at the success factors, it's very much related to saving through scale: the joules, the energy-per-bit cost, have been going down because we have, with the internet, an architecture that is built for scale.
P
If
you
would
try
to
replace
the
internet
with
many
many
parallel
networks,
four
different
applications
that
were
all
smaller
scale,
you
would
end
up
with
a
lot
higher
energy
utilization
right.
So
there
are
various
aspects
of
the
internet
architecture,
the
ietf
protocols
that
have
been
contributing
to
this.
You
know
lowest
cost
heights
highest
scale,
so
the
datagram
with
the
multiplexing,
the
end-to-end
transport.
So
I'm
going
through
these.
P
You
know
architectural
foundational
parts
of
the
internet
architecture
relating
them
to
in
the
energy
consumption,
the
convergence
of
network,
obviously
being
the
most
easy
to
understand
right,
you
started
with
data
networks.
You
had
separate
voice
networks.
You
had
separate
video
networks,
we
did
diffserv,
we
did
inserv.
P
P
P
We can go faster through this. So, energy saving versus sustainability: that's another interesting taxonomy thing that we need to think about, because there are a lot of interesting additional metric aspects we need to look into. There is a difference between good, renewable, and bad energy that we need to take into account, and we also need to compare the saving that we're getting to the pre-digital solution.
P
For
example,
the
fact
that
we've
all
been
aware
of
that
you
know
traveling
in
the
airplanes
on
the
same
amount
of
energy
consumption
is
worse
than
doing
the
same
thing
on
the
ground.
So
that's
where
applications
like
tele
collaboration,
what
we've
been
doing
in
the
itf
ourselves
come
in
and
a
lot
of
foundational
technologies
like
rtc
web
next
slide.
P
Okay,
so
then
there's
the
whole,
you
know
page
about
exactly
that:
low
power,
lossy
networks,
where
we
have
many
many
working
groups,
so
that
is
being
covered.
I'm
not
going
to
go
through
details
to
that.
The
higher
layers
of
that
are
called
constraint,
nodes
and
networks.
Also,
several
working
groups
with
good
work
on
that
next
slide.
P
And
then
there
are
specific
sample
technology
enablers
that
even
recently
we
had
been
trouble
and
opportunities
to
leverage
right
so
sleepy
nodes
is,
is
a
core
technology
to
optimize
through
specific
protocol
operation,
the
ability
to
run
on
battery
or
energy
harvesting,
and
then
my
my
personal
friend
friending
me
in
this
case
ip
multicast,
so
those
technologies
are
covered.
Then
the
energy
production,
consumption
management
network,
smart
grid
and
the
even
more
cool
use
case
of
the
synchrophaser
network.
P
And
then
the
e-main
we
mentioned
that
and
finally,
the
power
awareness
in
forwarding
in
routing
protocols.
That's
what
benoit
mentioned
as
where
we
stopped,
because
we
kind
of
didn't
have
you
know,
I
think
the
tool
set
to
go
further.
I
think
we
have
some
of
that
tool
set
now,
so
we
can
look
into
it
again,
sdn
or
some
of
the
anima
work
that
we've
been
doing,
I
think,
will
help
a
lot
for
the
resilience
we
need
so
that
we
can
low
power
networks
better,
there's
just
a
little
bit
of
the
gaps.
P
Just
I
mean
this
is
just
trying
to
capture
what
we've
done,
not
what
hopefully
we
can
do
so
next
slide,
which
really
brings
us
to
you
know
the
call
fraction
in
terms
of
please
read
it
comment
on
it
and
even
even
better
so
contribute
to
it.
You'll
see
a
lot
of
chapters
where
you
may
be
the
expert
of
of
any
of
them
or
you
may
be
missing
chapters.
So
hopefully
this
can
become
a
community
effort.
P
We
can
get
a
mailing
list
if
the
one
that
we've
just
randomly
been
looking
for
is
not
the
best
one.
So
there
is
this
old:
reducing
energy
consumption
with
internet
product
called
exploration,
called
recipe,
and
so
that
might
be
good
mailing
list
to
to
revive
after
it's
been
dormant
for
10
years,
because
that
was
about
the
time
when
we
stopped
doing
all
this
good
work
and
start
discussing
energy
related
work
again,
but
of
course
nmrg
everything
needs
to
be
managed
right.
So
that's
why
such
a
broad
topic
is.
P
Is
I
thought,
from
our
perspective
very
good
to
be
you
know
it's
it's
the
new
vertical
you
know
for
for
all
of
of
management
right,
so
we
had
security.
We
need
to
management.
Now
we
have
energy.
We
need
to
manage
that.
So
it's
also
a
vertical
topic
for
this
group.
Thank
you.
B
Thank you very much. Questions and comments will be taken offline. Next presentation.
E
Okay, good morning, nice to meet you. My name is Hong; I'm working for a university in South Korea. First, thanks for giving me a chance to present. Next page, please. Yeah, this is the history: for our last meeting, we submitted the draft, but at that time we didn't have time to present, so this is the first time presenting it. Okay, next page, please.
E
So,
as
I
know
that
many
person
have
some
interest
of
the
ai
and
ai
technology,
but
as
a
expert
of
the
network
and
telecommunication,
it
is
not
easy
to
find
a
item
for
standardization
and
there
are
some
changement.
There
are
some
changes
in
the
ai
area,
for
especially
the
deployment
ai
services,
for
example,
in
the
before
we
are
focusing
on
training
running,
but
nowadays
we
are
focusing
on
implants
prediction.
E
So
if
we
think
about
the
implants
on
not
only
high
performance
server,
but
also
small
hardware,
micro
controller
for
performance
cpu
and
ai
chipset,
the
optimal
target
device,
the
reason
is
the
cost.
So
if
you
utilize
the
high
performance
server,
okay,
it
is
good,
but
the
cost
is
very
high.
So
if
you
think
about
the
cost,
we
are
also
think
about
the
raw
performance,
the
hardware.
E
So, in this document, we show some configurations of the system, for example the AI model.
There are pros and cons, and another configuration point is the serving framework. Okay, you can utilize a web framework to provide AI services, but nowadays there are, for example, TensorFlow Serving and TorchServe, which are serving frameworks targeted at inference systems. So you can utilize a web framework or a specific serving framework; then the communication method, for example REST, or you can utilize gRPC; and the device capacity, for example CPU, RAM, or the capacity of the network interfaces; and the inference data.
E
Yes, this is a generic procedure for AI services, and I know that many people are very familiar with it. The first step is to acquire and store data; the next step is to analyze and preprocess the data; the next step is to train the AI model; the following step is to deploy and run inference with the AI model; and the final step is to monitor and maintain accuracy.
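The five-step procedure just listed can be sketched as a toy pipeline, one stub per stage. This is purely illustrative: the data, the trivial mean "model", and the function names are assumptions for the example, not anything from the presented draft.

```python
# Toy end-to-end AI service pipeline mirroring the five stages above.

def collect_and_store():
    return [1.0, 2.0, 3.0, 4.0]            # stage 1: acquire raw data

def analyze_and_preprocess(raw):
    return [x / max(raw) for x in raw]     # stage 2: simple scaling

def train(data):
    mean = sum(data) / len(data)
    return lambda _x: mean                 # stage 3: stand-in "model"

def deploy_and_infer(model, x):
    return model(x)                        # stage 4: serve a prediction

def monitor(prediction, target):
    return abs(prediction - target)        # stage 5: track accuracy

raw = collect_and_store()
data = analyze_and_preprocess(raw)
model = train(data)
pred = deploy_and_infer(model, 0.5)
error = monitor(pred, 0.5)
```

The speaker's point is that only some of these stages (collection, deployment/inference, monitoring) actually touch the network; training and preprocessing can stay local.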
E
So, if we collect the data, the internet or the network can be used; but for training the AI model, or analyzing and preprocessing the data, there is no connection with the network, and it can be done only on a local server, in a local area. But if we think about deploying or running inference with the AI model, and monitoring and maintaining accuracy, then I guess there is something to do from the point of view of the internet and the network, because we can deploy the system in a distributed approach. Okay, next page, please.
E
But if you divide the functions, as in the second figure, you can put the serving module of the AI services in a cloud server; then you can ask the AI service provider for the serving model in the cloud. In this case, there are some network and communication issues. And the third figure is to deploy the inference service on each device.
E
Yes, between the second figure and the third figure there are similarities and differences: for example, in the second figure, the AI inference service on a cloud server utilizes a high-performance server in the cloud, but in the third figure, the AI inference service on each device utilizes a lightweight or small device in the edge network. So that is the difference.
E
So, until now, if we thought about AI services or systems, the main objective was the accuracy of the model; but if we think about cost versus performance, it's not only the accuracy of the model: other points, for example the latency of the services, the network throughput, and the resource utilization, are also important. So, to satisfy these objectives of AI services, the points we must consider are: first, the AI model; as I said, there are two or three kinds of model, for example a heavy AI model or a lightweight AI model.
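The accuracy/latency/cost trade-off the speaker describes can be sketched as a weighted score over deployment options. A hedged illustration only: the weights, numbers, and option names are arbitrary assumptions, not measurements from the presentation.

```python
# Score deployment options on the stated objectives: reward accuracy,
# penalize latency and resource cost. Weights are illustrative.

def score(option, w_acc=0.5, w_lat=0.3, w_cost=0.2):
    return (w_acc * option["accuracy"]
            - w_lat * option["latency_s"]
            - w_cost * option["cost"])

options = {
    "cloud_heavy_model": {"accuracy": 0.95, "latency_s": 0.5, "cost": 1.0},
    "edge_light_model":  {"accuracy": 0.85, "latency_s": 0.1, "cost": 0.3},
}
best = max(options, key=lambda k: score(options[k]))
```

With these made-up weights, the cheaper, faster lightweight model wins despite lower accuracy, which is exactly the kind of decision the draft is asking how to make systematically.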
E
If you use a heavy AI model, then the accuracy is good, but the latency is not good; there are pros and cons. The second point is the serving framework: as I said, there are two kinds of serving framework; one is the web-based serving framework, and the other is the inference-targeted serving framework, for example TensorFlow Serving or TorchServe.
E
Yeah, I think that a good solution is to use an inference-targeted serving framework, but if you utilize one, then there are some requirements, for example a high-performance CPU or high-performance hardware; so there are also pros and cons regarding the serving framework. And the other consideration is the communication method.
E
I know that many people are familiar with REST or gRPC; in the AI system, if you utilize a lightweight device, then it is better to utilize gRPC. And the other consideration is the machine capacity, for example CPU, RAM, network interface, etc., and the final consideration is the inference data.
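One reason a binary protocol such as gRPC tends to suit constrained devices better than JSON/REST is payload size. This stdlib-only sketch uses packed binary floats as a stand-in for protobuf encoding; the feature vector is hypothetical.

```python
# Compare the wire size of the same inference input serialized as JSON
# text vs. packed binary floats (a rough stand-in for protobuf).
import json
import struct

features = [0.123456789] * 256  # hypothetical inference input vector

json_bytes = json.dumps({"features": features}).encode("utf-8")
binary_bytes = struct.pack(f"{len(features)}f", *features)
# Packed floats cost 4 bytes each; the JSON text spends far more per value.
```

Smaller payloads mean less radio time and less energy on a constrained edge device, which is one way to read the speaker's recommendation (gRPC also benefits from HTTP/2 multiplexing, not shown here).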
E
For example, whether the data is real-time, what the throughput is, and whether the data is secure or non-secure. So I think that there are other considerations for deploying AI services, but for now we are defining this kind of configuration to implement an AI system. Okay, next page, please.
E
So, on the left side, the AI service runs as one piece, and on the right side, the AI service is split into modules. Yes, you can run the two modules on the same server, or you can divide them in a distributed way, so the right-hand module can be located in the server, or the edge, etc. You can change this configuration by changing the IP address or port number. Okay, next page, please.
E
Yes, this is one of the experimental results of our object detection service across the deployments. As you can guess, in this experiment, the cloud server is very good in terms of inference time; the edge device is not good, and the local device is also good. So this is only one example, for latency, but if you also think about accuracy between the two, what is your decision?
E
There are many cases to combine, but, for example, you have to consider which item is important. Okay, next page, please. Yeah, so right now I want to ask for some comments, because this is the first and initial version; so I hope at the next meeting, or the one after, we will bring enhanced results, share our results and our understanding with yours, and promote some of this activity in NMRG. So, thank you.
B
Okay, well, I will now give a quick update on the document about the research challenges of coupling AI and network management. As you know, it was previously shared as a Google document, so there have already been different iterations, but here is a -00 version of the draft. Next slide, please. So, basically, what have we done so far? We have, of course, transformed it into the right format, with the help of the five key people that you have seen on the previous slide.
slide.
B
Actually
because
at
the
beginning
there
are
a
lot
of
contributors
that
are
listed
in
the
acknowledgement,
of
course,
but
we
need
to
to
shorten
the
main
editor
team.
We,
of
course
we
integrate
different
feedbacks
that
you
already
received
or
the
previous
version.
We
changed
a
bit
the
the
title
as
well,
also
to
include
body,
let's
say
the
two
ways
of
ai
m
and
track
management,
and
although
we
identify
there
are
some
challenges
that
you
have
identified
to
be
important,
where
there
was
no
content,
and
so
we
we
had
the
content
now
next
slide.
B
Please
so
here
are
the
major
idiots.
So,
basically,
we
had
a
new
constraint
regarding,
oh,
we
can,
let's
say,
identify
and
what
what
characterize
a
difficult
problem,
network
management
and
what
type
of
constraint
and
certain
constraints
is
more
characteristic
was
able
to
have
more
time,
efficient
solutions
that
we
extended
to
be
more
cost
effective
solution
in
general,
not
only
time
efficient,
so
all
type
of
actually
cost
can
include
all
the
energy.
Of
course.
B
We also extended the description of possible scenarios, and, regarding challenges, we added the challenge of having the human in the loop when using AI in the context of network management. Next slide, please. So here is just a summary of the current document. You can see that there are three main categories of challenges: the first one is AI techniques for network management, and the second is network data as input for machine learning algorithms.
B
So, what are the next steps? Of course, as there have already been different iterations, the document is somehow quite mature, but of course we still need your feedback. We have already identified some changes that have to be done, in particular regarding distributed AI and the integration of lightweight AI.
B
There is some discussion within, let's say, the IRTF about highlighting a bit more the different types of problems regarding the type of data, in terms of labeled or unlabeled data, knowing that maybe in network management we have more unlabeled data; so that is also something that we have to keep in mind.
B
Since we are addressing, let's say, the use of AI in our domain, there is some discussion about adding the legal or regulatory aspects of the use of AI, of course, again in the context of network management and networking. And so now, yes, we are requesting your feedback, maybe on the mailing list. But of course we have some questions regarding the value of the document in general, for the group and for the community.
B
What do you think of how we present the different challenges, and of which challenges are the important ones? Are there maybe gaps? Is it right or not?
B
Have we missed something? This is very important: tell us if we have missed a very big challenge. There is also the question of granularity, because it's hard to be truly exhaustive; it is a real challenge to cover everything. But of course we don't want to dig too deep into the technical level; again, we don't want something specific to a use case or anything like that. And that's it for this time.
F
Thomas Graf from Swisscom. First of all, thank you very much for the document; I think it's very important. Speaking as a network operator who is collecting a large amount of data from the network and also developing anomaly detection, I would like to give feedback on sections 7.2 and 7.3.
F
I recommend adding references to RFC 9232, which is the network telemetry framework, and especially describing a little bit about network data modeling. For the AI/ML anomaly detection part, I suggest a reference to the data mesh architecture, which is currently being used in the industry, and especially to source-aligned data, that is, preserving the format from the network, and also to aggregates, which help to reduce the amount of data.
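The aggregation point can be sketched as follows. This is a hypothetical illustration; the record format and field names are assumptions, not taken from RFC 9232 or the draft:

```python
from collections import defaultdict

# Hypothetical raw telemetry records as exported from the network
# (field names are illustrative, not from RFC 9232).
raw = [
    {"interface": "eth0", "bytes": 1200},
    {"interface": "eth0", "bytes": 800},
    {"interface": "eth1", "bytes": 500},
    {"interface": "eth0", "bytes": 1000},
]

def aggregate(records):
    """Roll raw samples up per interface: far fewer rows, same totals."""
    totals = defaultdict(lambda: {"bytes": 0, "samples": 0})
    for r in records:
        totals[r["interface"]]["bytes"] += r["bytes"]
        totals[r["interface"]]["samples"] += 1
    return dict(totals)

print(aggregate(raw))
# → {'eth0': {'bytes': 3000, 'samples': 3}, 'eth1': {'bytes': 500, 'samples': 1}}
```

The aggregate keeps the signal an anomaly detector needs (per-interface volume) while shrinking the data handed downstream, which is the data-reduction argument made above.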
A
So now we have our last presentation of today, which is on network measurement intent, an IBN use case.
J
So, due to time limitations, we will not cover all of them in this presentation; we'll just take a few of them as examples. If you are interested, you can read the updated version of our drafts on the datatracker. The first question is about the sampling rate: whether the sampling rate should be constant or changeable, due to different requirements and algorithms.
J
The second one is about how to ensure that the measurement results meet the requirements. In our IBN NMI system, the measurement result will be ensured by closed-loop verification.
J
The assessment module will determine whether the result is acceptable and give feedback to the policy module to modify the policies. Next slide, please.
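A minimal sketch of the closed loop just described, assuming a delay requirement and a sampling-interval policy; all names, values, and thresholds are illustrative, not from the draft:

```python
# Hypothetical closed-loop verification: measure, assess against the
# intent's requirement, and feed back a policy change (here: halving
# the sampling interval when a result is unacceptable).

def assess(measured_delay_ms, required_delay_ms):
    """Assessment module: is the measured result acceptable?"""
    return measured_delay_ms <= required_delay_ms

def adjust_policy(policy, acceptable):
    """Policy module: tighten sampling when the result is unacceptable."""
    if not acceptable:
        policy["sampling_interval_s"] = max(1, policy["sampling_interval_s"] // 2)
    return policy

policy = {"sampling_interval_s": 60}
for measured in (12.0, 55.0, 48.0):  # simulated measurement results
    ok = assess(measured, required_delay_ms=50.0)
    policy = adjust_policy(policy, ok)

print(policy)  # → {'sampling_interval_s': 30}, halved once after the 55.0 ms result
```

The same loop also answers the earlier sampling-rate question: the rate is changeable precisely because the assessment feedback can rewrite the policy.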
J
And the third question is about the difference between a static NMI and a dynamic NMI; we have made a clarification of the different terms. A static NMI means the measurement is independent of the network state: let's say we want to measure the network delay of packets, and the IBN-CS will continuously sample however the network behaves. In contrast, dynamic means the measurement corresponds to the network state: for example, we want to measure the network delay at [unclear] time.
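One way to read the static/dynamic distinction, sketched with hypothetical names: a static intent samples unconditionally, while a dynamic intent samples only while a network-state condition holds.

```python
# Hypothetical sketch of static vs. dynamic measurement intents.
# Static NMI: sample regardless of network state.
# Dynamic NMI: sample only when a network-state condition holds.

def static_nmi(samples):
    """Static intent: measure delay on every sample."""
    return [s["delay_ms"] for s in samples]

def dynamic_nmi(samples, condition):
    """Dynamic intent: measure delay only when the state condition holds."""
    return [s["delay_ms"] for s in samples if condition(s)]

samples = [
    {"delay_ms": 10, "utilization": 0.2},
    {"delay_ms": 40, "utilization": 0.9},
    {"delay_ms": 12, "utilization": 0.3},
]
busy = lambda s: s["utilization"] > 0.8  # illustrative network-state condition

print(static_nmi(samples))         # → [10, 40, 12]
print(dynamic_nmi(samples, busy))  # → [40]
```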
J
The last question is about the policy and whether it will be changeable. As with question two, in our NMI system the policy execution runs in a closed loop, and it will be assessed in order to tell the policy module to modify the different policies. Next slide, please.
J
We also added some related references to the draft, modified the figures, and fixed some improper writing; all of these changes have been made in the updated draft. Next slide, please. As for the next steps for the draft, we still want to say that this NMI draft can be seen as one of the IBN use cases.
J
We very much welcome any good ideas about our draft, and if you have other, separate IBN use cases, we are open to and would very much welcome merging them into one single IBN use case draft. So we're looking forward to your comments and suggestions. Thank you.
A
Thanks a lot, Kian, for sticking to the time, and also given that it is getting quite late in your area. So thanks, everyone; we are reaching the end of the meeting time for today. I think we covered a lot of topics.
A
I really encourage and invite everyone to continue commenting after the meeting, whether when you meet each other, on the mailing list, or by contacting the different participants of the research group; it is very important to continue the activity in between meetings. So thanks a lot, everyone, enjoy the rest of the meeting, and thanks to our remote presenters.