From YouTube: IETF-COINRG-20220928-1400
Description
COINRG meeting session at IETF
2022/09/28 1400
https://datatracker.ietf.org/meeting//proceedings/
A: We're going to give everybody about two more minutes and then we'll start.
A: Okay, I think we can start, since we have a pretty full agenda. This is Computing in the Network; this is our interim meeting, and Jeff is on. Eve has been ailing, I would say; I hope she can join us later, but she sends her best.
A: So this is the Note Well, which everybody knows very well. These are the policies. Essentially, I think what's important is the anti-harassment policy, for me and also for a lot of people, and the question of patents. I think this is also very, very important: if there is information in the presentations you give today that you think is covered by patents, please make sure it is disclosed. I'm sure I'm not going to have any information here that isn't already out there, but still.
A: We're not here to do standards, and you will see today that most of the agenda is on research issues, projects, and new ideas. I think this is what we want to encourage and continue encouraging. I know Colin likes it when I say this, but we also invited today a lot of people who presented at other conferences, and we were happy that they agreed to share their work with us.
A: So I think it is always good to expand the collaborations and the participation, and in our case we're very, very lucky to be in a rapidly expanding field. Do you want to take over, Jeff?
C: Okay, hello, everyone. So today we will start with two papers. The first one is about Trio; it proposes new programmable hardware, like Tofino, but more similar to the chipsets in the routers in the Internet, which can be very interesting. The second paper is on distributed coordination for in-network computation.
C
Actually
that
was
introduced
to
work
in
asean
kotlin's
last
week,
I
believe
and
that's
quite
relevant
to
our
group,
and
then
the
following
three
presentations
was
proposed
to
work
proposed
to
last
ITF
meeting,
but
we
didn't
have
enough
time
there
at
the
same
time,
so
they
will
be
presented
today,
so
Doug
chosen
or
I.
They
will
give
us
an
update
for
the
use
case.
C
Jobs
and
early
early
will
introduce
a
new
app
just
about
distributed
and
learning
architecture,
and
you
think,
though,
will
introduce
their
some
use
cases
for
data
population
in
network
and
our
last
presentation
we
have
from
Stefano.
He
presented
the
EI
EIP
in
one
of
our
previous
meetings
like
here
today.
He
will
give
us
some
updates
EIP
in
context
of
also
machine
learning.
A: Yeah, we can go to the next one, okay. So if you are here, you know what Meetecho is. I asked for help on the minutes; I may get on the tool myself later, when we're done with this. Obviously we have our mailing list, and we have all the material for today online.
A
We
have
again
we
have
two
documents
that
need
updates,
but
there
are
G
documents
and
I'm
sure
that
there's
going
to
be
maybe
discussion
of
sending
them
forward
in
the
approval
process
after
they
need
their
update.
We
have
a
new
draft
today,
which
is
this
distributed
learning,
and
then
we
have
this
other
ton
of
other
drafts.
That
I
think
we
really
need
to
address
in
terms
of
of
making
them
either.
A
You
know
a
lot
of
them
are
expired
anyway,
so
either
we
keep
them
expired
or
we
move
forward
with
them
and
we
can
discuss
that
online
is
not
necessary
to
was
there
a
question.
I
saw
somebody
maybe
asking
questions,
no
okay
and
then
our
Milestones
actually
we're
we're
late
on
on
two
Milestones,
which
is
which
was
I.
A
Think
the
the
covert
has
hit
us
in
the
way
I
think
I
think
the
scope
is,
is
evolving
so
much
that
I
think
the
three
of
us
agree
that
it's
really
hard
to
to
pinpoint
and
our
Milestone
review
we're
late
and
we
plan
to.
A
Probably
we
should
do
that
more
online
to
have
more
people,
but
that
actually
is
something
that,
upon
my
really
to-do
list
yesterday,
when
I
get
out
of
the
start
of
the
semester
here
and
three
proposals
I'm
involved
with,
but
it's
really
under
to
do
and
Jeff
even
I
are
going
to
cuddle
on
this
and
propose
something
else
and
actually
propose
a
new
set
of
Milestone.
That
really
really
reflect.
Like
I,
said
the
the
dynamic
nature
of
this
of
this
field
that
we're
in
you
want
to
add
something.
Jeff.
A: So, without further introduction, I'm going to end my show and unshare my screen, and we can start going through the presentations. Please send your questions in the chat; I'll monitor the chat while I also try to take some notes. And the first presentation is Trio.
B
Share
my
screen,
can
you
see
my
screen
or
is
it
it's
coming.
D
Okay,
so
this
is,
you
can
see
my
slides,
not.
B: Let me try again. Yep. Do we have to approve it?
A: ...but obviously that's not working.
D: So hello, everyone. My name is Mingran; I am a PhD student at MIT with Professor Manya Ghobadi. Today it is my great pleasure to be here to share our work on using Trio, Juniper Networks' programmable chipset, for emerging in-network applications. This is a joint work between MIT and Juniper Networks.
Data-intensive applications such as machine learning, databases, storage, and data analytics are the foundation of today's online services. With the gradual slowdown of Moore's Law, hardware accelerators are struggling to meet the performance demands of emerging cloud applications.
D: Let's take a deeper look into one of the representative applications: machine learning training. In data-parallel distributed machine learning, the deep neural network is replicated across multiple servers, and each server processes a small subset of the entire training data set. At every iteration, servers synchronize their model parameters by exchanging and aggregating their gradients to ensure convergence.
D: Our results show that the presence of stragglers imposes a practical deployment challenge for in-network aggregation for distributed machine learning. Stragglers are quite common in shared clusters hosting several jobs, where different servers experience uncorrelated performance delays due to causes like congestion, load imbalance, or garbage collection.
D: The reason is that enabling efficient in-network straggler mitigation in Tofino switches is very challenging. To handle the straggler problem efficiently inside the network, the switch needs to perform efficient timer-based operations to mitigate stragglers, such as periodically checking whether a straggler event has occurred and sending notification packets. This is quite challenging because it requires frequent interaction between the switch data plane and the control plane.
D: Yeah, nice, okay. So in this talk I will describe Trio-ML, our proposed system for efficient in-network straggler mitigation. In particular, we achieve 1.8x faster training time for machine learning jobs by leveraging Juniper Networks' programmable chipset. In the next part of my talk, I will first give an overview of Juniper Networks' chipset; then I will discuss Trio-ML, our proposed in-network straggler mitigation for distributed machine learning training; and finally I will talk about our evaluation results.
D: Trio is a programmable chipset used in Juniper Networks' routers and switches for over a decade. Trio's architecture is fundamentally different from that of Tofino: Trio has a thread-based architecture, and the central packet processing element is called the Packet Forwarding Engine (PFE). An incoming packet to a Trio-based switch, after it enters the ingress Packet Forwarding Engine, is processed by one of the available threads.
D
Then
the
packet
will
be
sent
to
the
egress
packet
forwarding
engine
and
will
be
processed
by
another
available
thread.
After
the
process
is
completed,
the
packet
will
be
sent
out
when
multiple
packets
arrive.
These
packets
are
processed
independently,
using
thousands
of
parallel
threads
on
Trio
different
packets
do
not
necessarily
flow
through
the
same
physical
path
on
the
chip.
As
a
result,
Trio
gracefully
handles
non-homogeneous
packet
processing
rate,
whereas
intofino
only
line
rate
processing
is
supported
with
the
true
architecture
in
mind.
D
...in the next part I will talk about Trio-ML, our proposed in-network straggler mitigation for distributed machine learning training. To build an efficient in-network straggler mitigation technique, we need in-network straggler detection, which enables the switch to efficiently detect straggler events.
D
We
also
need
a
network
struggler
recovery
which
enables
the
switch
to
gracefully
serve
the
job
without
waiting
for
strugglers
in
network
struggler.
Detection
requires
timer-based
operations
based
on
user-defined,
straggler,
timeout
and
in
network
struggle.
Recovery
requires
a
lightweight
mechanism
in
the
switch
to
proceed.
The
computation
our
system
qml
addresses
both
challenges
using
trios
threads.
D
As
a
reminder
back
to
our
running
example,
to
perform
in-network
aggregation
for
machine
learning
models,
the
servers
will
send
the
model
gradients
to
the
switch
and
the
switch
will
aggregate
the
gradient
result
and
send
back
the
result.
Qml
creates
a
new
gradient
record
when
it
receives
a
new
packet
from
the
servers
in
normal
cases
without
any
struggler.
Once
Trio
ml
receives
packets
from
all
the
servers
and
completes
the
computation,
it
will
generate
aggregated
results
and
send
back
to
all
the
servers
to
build
efficient,
in-network
structure
detection.
D
D
...however, the problem with the naive approach is that it couples the start of timeout threads with the arrival of new packets, so the switch would need to create one thread per packet. As a result, this approach is not scalable for large machine learning models, in terms of the number of threads needed. Trio-ML's approach is to decouple the start of timeout threads from packet arrivals: in our design, we divide the total set of gradient records into N groups, and each group is scanned by one thread. We also add a "recently referenced" flag to each gradient record.
D: After each delta-timeout interval, Trio-ML launches a scan thread to scan the gradient records again. If the recently-referenced flag has not been set since the previous scan, then the gradient record has timed out and a straggler is detected. With this approach, Trio-ML guarantees straggler detection within a delta-to-two-delta timeout window.
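For concreteness, here is a minimal host-side Python sketch of that scan-thread scheme as described in the talk: records are split into N groups, packet arrivals set a recently-referenced flag, and one scan thread per group clears and checks the flags every delta interval. All names (GradientRecord, DELTA_TIMEOUT) and the exact flag polarity are assumptions for illustration, not Trio microcode.

```python
import threading
import time

DELTA_TIMEOUT = 0.05   # assumed user-defined straggler timeout, in seconds
NUM_GROUPS = 4         # N groups, one scan thread each

class GradientRecord:
    """Hypothetical model of one in-progress aggregation slot."""
    def __init__(self, key):
        self.key = key
        self.recently_referenced = True   # set by every packet arrival
        self.timed_out = False

def on_packet_arrival(record):
    record.recently_referenced = True     # mark the record as live

def scan_group(records):
    # One scan pass every delta: clear live flags; a record whose flag is
    # still cleared from the previous pass has seen no packet for at least
    # delta (and at most 2*delta), so it is declared a straggler.
    while True:
        time.sleep(DELTA_TIMEOUT)
        for r in records:
            if r.recently_referenced:
                r.recently_referenced = False
            elif not r.timed_out:
                r.timed_out = True
                print(f"straggler detected on record {r.key}")

records = [GradientRecord(i) for i in range(64)]
for i in range(NUM_GROUPS):
    threading.Thread(target=scan_group, args=(records[i::NUM_GROUPS],),
                     daemon=True).start()
time.sleep(3 * DELTA_TIMEOUT)   # let the scans run; all idle records time out
```

Because a packet can touch a record right after a scan clears its flag, detection takes at least one and at most two scan intervals, which matches the delta-to-two-delta guarantee from the talk.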
D
True
ml,
achieves
a
network
struggler
recovery
by
sending
partial
aggregation
results
to
all
the
servers
prior
research
shows.
Partial
aggregation
can
achieve
comparable
convergence
performance.
In
this
way,
non-structural
servers
continue
to
make
progress
without
waiting
for
a
struggler
for
a
very
long
time
and
also
show
ml
keeps
updating
the
struggler
to
ensure
the
machine.
Learning
model
is
consistent
among
all
the
servers.
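A sketch of the recovery side under the same assumptions: aggregate whatever has arrived, skipping the flagged stragglers, so non-straggler servers keep making progress. The helper below is illustrative, not the actual switch logic.

```python
import numpy as np

def aggregate_with_recovery(gradients, expected, timed_out):
    """gradients: dict server_id -> np.ndarray; expected: all server ids;
    timed_out: servers flagged by the detection scan."""
    usable = (set(gradients) & expected) - timed_out
    partial = sum(gradients[s] for s in usable) / max(len(usable), 1)
    # Per the talk, the late server is later brought back in sync so the
    # model stays consistent across all replicas.
    return partial, usable

grads = {s: np.ones(4) * s for s in range(3)}        # server 3's packet is late
print(aggregate_with_recovery(grads, {0, 1, 2, 3}, timed_out={3}))
```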
D
We
compare
our
solution
to
ml
to
Tofino
based
in
network
aggregating
solution
called
switch
ml
in
an
ideal
environment
where
no
strugglers
exist
to
evaluate
the
training
performance
when
struggler
exists
in
the
cluster.
We
followed
prior
work
on
structural
generation
pattern,
namely
we
set
three
possible
delay
points
per
iteration
at
each
delay.
Point
one
Server
slows
down
with
a
given
structural
probability
p
and
the
delay
time
will
be
uniformly
randomly
chosen
between
half
to
two
times
of
the
typical
iteration
time.
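The straggler pattern is easy to restate in code; this snippet simply reproduces the stated distribution (three delay points, probability p, uniform delay in [0.5, 2] x iteration time) and is not taken from the paper's artifact.

```python
import random

def straggler_delays(iter_time, p=0.16, delay_points=3):
    """Extra delays injected into one training iteration."""
    return [random.uniform(0.5 * iter_time, 2.0 * iter_time)
            for _ in range(delay_points) if random.random() < p]

print(straggler_delays(0.3))  # e.g. for a 300 ms typical iteration
```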
D
The
Blue
Line
shows
the
iteration
time
for
qml,
and
the
green
line
shows
the
iteration
time
for
the
ideal
case
with
no
strugglers.
True
ml
is
able
to
maintain
the
training
iteration
time
close
to
the
ideal
case,
because
we
have
the
in-network
structure
mitigation
to
mitigate
the
effect
of
stragglers,
the
struggling
probability
P
equals
to
16
percent.
True
ml
achieves
1.8
times
speed
up
compared
to
3
Channel.
D
D
D
D: To conclude: stragglers impose a deployment challenge for in-network computing solutions. In this work we propose Trio-ML, a novel in-network straggler mitigation system using Juniper Networks' thread-based programmable chipset, and Trio-ML outperforms today's in-network aggregation solution by up to 1.8x. That's all for my presentation. Thank you so much for listening, and I'm happy to take any questions.
C: So, Mingran, I have two questions, maybe more. The first one is: you mentioned the iteration time, and it seems to be several hundred milliseconds. Is that a realistic assumption? A typical network delay is maybe several microseconds, so why is this kind of assumption realistic in the data center?
D: Yeah, so your first question is about the training iteration time being in the hundreds of milliseconds. To answer this question: basically, we performed this training experiment on real-world hardware with a real-world model.
D
This
is
the
result
we
get
from
The
Real
World
model
and
also
we
compare
our
results
to
the
typical
machine,
learning,
training
work
and
we
found
that
the
result
is
quite
similar.
So
it's
not
an
assumption,
but
it's
the
result
from
the
real
experiments.
C
So
can
you
go
back
to
previous
slide
and
I
noticed?
You
have
a
set
in
your
setup?
Yes,
so
you
introduce
some
staggers
right.
So
that
way
is
the
time
scale
of
that
stagger
is
it
is
iteration
time.
D: It is a realistic assumption, because the typical iteration time is on the next slide; basically, we choose the straggler delay to be half to two times the training iteration time. This is based on prior machine learning work. Or do you think it's too large or too...?
D
Be
caused
by
many
reasons:
it's
not
simply
about
Network.
There
are
also
something
can
happen
on
the
struggler
operating
system,
such
as
the
strawberries
busy
doing
other
tasks
or
the
driver
is
doing
some
garbage
collection
tasks
on
the
server.
So
these
are
all
the
reasons
that
can
cause
the
struggler.
It's
not
simply
about
the
network
delay.
D: What do you mean by background? You mean other tasks happening at the same time? Oh yeah. So in our experiment we do not have other tasks happening at the same time, but in a realistic setting it is possible that multiple tasks are running at the same time, and this is where our straggler mitigation can benefit the real system.
C: Okay, I have no further questions. Thank you. And there is one question in the chat channel; can you see it, or I can read it: which improvements are related to the specificity of the chip, and which are generic?
D: The improvement we are getting from the straggler mitigation is quite specific to our approach of mitigating the straggler effect. As you can see in this evaluation result, the red line is the in-network aggregation using the Tofino switch, and the blue line shows the iteration time based on our chipset. Our main benefit comes from our ability to mitigate stragglers inside the cluster, whereas in the other, Tofino-based solutions it is very challenging to implement such a mechanism. So the improvements basically come from the straggler mitigation we have in the system.
A: Thank you again very, very much for this presentation, and we can move to the next one. (Thank you. Thank you.) And the next one is...
A: ...DICer, on in-network computing, and I will let the authors present. Who's presenting? Uthra is presenting? So I will let Uthra present it. Thanks.
F: Perfect, thank you. So, good evening, everyone. I am a PhD student at Robert Bosch GmbH, together with the Technical University of Munich, and my research topic is currently focused on orchestration solutions for ICN-based in-network compute systems. Today I'm going to present DICer, which is a distributed coordination solution for orchestrating in-network computations.
F: In host-centric networking, to retrieve data one first needed to establish a channel to the node providing this information. Information-centric networking (ICN) architectures such as Named Data Networking shift this focus from the host providing the data to the data itself, hence the name information-centric networking. This enables end users not only to address and access data directly using name-based access mechanisms, but also results in a loose coupling of data to the host. Other features of ICN include inherent multicast capabilities through interest aggregation, in-network caching, flexible hop-by-hop routing, and so on.
F: So information-centric networking provides access to data instead of the host. Now, Named Function Networking (NFN) further extends Named Data Networking, or NDN, by enabling end users to request not just static data but dynamically computed results. Consumers request execution results with the help of defined workflows that help structure the request name. NFN uses the underlying NDN forwarding principles along with resolution strategies, which resolve the function request and help forward it to suitable execution nodes.
F
The
Default
Resolution
strategy
in
nfn
is
find
or
execute
the
nodes,
identify
the
function
and
data
objects
from
the
requests
and
decide
to
forward
the
request.
If
the
node
has
neither
data
nor
function,
so
this
can
be
Illustrated
with
the
help
of
the
figure
here
where
nfn
Node
1
forwards
the
interest
i1,
because
it
has
neither
the
function
nor
the
data
for
executing.
F: If, on the other hand, only a few components are missing, the node initiates a fetch operation for those missing components. For instance, NFN node 2 here initiates a data fetch operation because it has the function F1 but is missing the content of data D1, and therefore sends a data fetch; if it were missing the function, it would send a code fetch, and so on. The node decides to execute a function if and once all the required components for computing are available, as sketched below.
F: Although this decentralized resolution engine performs per-packet decisions, which results in timely resolution of requests, there are still some limitations. For instance, the structure of the workflow defines the path taken by the interest, or the request. The consumer who defines these workflows is often unaware of the characteristics of the data or the function while creating these requests, and this could result in a workflow taking a different, less efficient path than the path it would have taken had the same request been structured a little differently.
F: Additionally, in order to make prompt decisions, the resolution engine restricts the amount of information it processes to the local knowledge of the node, and therefore does not consider the topology or the neighboring nodes' characteristics while making these forwarding decisions. This could lead to sub-optimal forwarding decisions, especially when considering decomposition of functions.
F
Daiso
is
a
distributed
coordination
solution
which
Targets
this
enhancement
of
local
knowledge,
scope,
used
in
decision
making
and
improved
the
sub-optimal
resolution
of
nfn,
so
dicer
retains
the
nfn
folder
at
all
compute
nodes.
Additionally,
dicer
establishes
synchronization
groups
where
all
the
nodes
in
a
group
exchange
their
state
information
with
each
other.
This
added
knowledge
of
the
surrounding
notes
would
then
help
improving
the
resolution
decisions
taken
by
nfn.
F
The
figure
shown
in
this
slide
comprises
of
three
nodes:
NFL
nodes,
a
b
and
c,
and
the
blue
circle
around
the
node
shows
the
knowledge
scope
of
the
node
using
which
the
nodes
which
you
can
see
that
does
not
extend
beyond
the
node.
The
Q.
Next
to
the
nodes,
V
and
C
refer
to
the
execution
queue
that
the
node
is
presently
handling.
F
When
a
compute
request
reaches
node
B,
the
node
decides
to
forward
the
interest
Upstream
towards
node
C,
due
to
lack
of
data
and
function,
even
though
it
has
a
compute
resources
that
can
be
used
for
execution.
Node
C,
on
the
other
hand,
despite
holding
the
necessary
components,
does
not
have
the
resources
to
spare
for
the
execution
of
this
interest.
F
Hence
the
interest
is
added
to
the
execution
queue
resulting
in
further
delay
in
the
response
with
Daiso
we
now
Group
B
and
nodes,
B
and
C
together,
and
the
nodes
periodically
exchange
information
about
the
estate
with
each
other.
In
this
case,
node
B
is
made
aware
of
the
execution
status
and
compute
resource
availability
of
node
C.
The
extended
knowledge
scope
around
node,
B
and
C
is
represented
using
this
brown
shaded
region
around
the
nodes.
F
Once
the
nodes
synchronize,
the
nodes
can
coordinate
with
each
other
enforcing
some
decisions
that
alter
the
resolution,
decisions
taken
by
their
respective
nfn
resolution
engines.
So
in
the
third
figure
here,
node
B
realizes
that
node
C
is
overloaded
with
interest
and
performs
some
fetching
operation
of
functional
data.
This
enables
the
node
B
to
now
participate
and
share
some
of
the
load
at
node
C
and
reduce
the
latency
in
responding
to
the
computed
result
to
the
consumers
question.
F
There
are
still
some
challenges
and
open
questions
that
needs
to
be
answered
in
order
to
realize
daiser.
So
how
do
we
group
notes
when
and
what
information
do
the
notes
synchronize
with
the
members
of
the
group?
What
kind
of
coordination
decisions
should
the
notes
take
with
the
collected
information,
and
how
do
we
handle
the
network
overhead
that
our
raises
out
of
such
periodic
synchronizations?
In
order
to
answer
these
questions,
we
split
the
concept
of
Tyson
into
four
phases,
namely
neighbor
node,
Discovery,
synchronization
group
formation,
synchronization
and
coordination
in
the
upcoming
slides.
F
The
first
phase
is
a
neighbor
node
Discovery
phase.
This
is
dedicated
for
the
nodes
to
understand
the
network
topology
and
the
static
characteristics
of
the
surrounding
nodes,
and
in
order
to
accomplish
this,
we
take
inspiration
from
the
name
link
state
routing
protocol,
the
nlsr
protocol
for
sending
periodic
hello
messages
from
every
node.
These
periodic
Discovery
messages
are
broadcasted
in
periodic
short
bursts
and
in
order
to
curb
or
restrict
the
flooding
of
network
links
with
such
Discovery
messages,
we
use
interest
hop
limit
to
a
certain
range
within
which
the
nodes
need
to
be
discovered.
F
Every
node
receiving
the
discovery
messages
would
then
respond
with
the
static
configuration
information
such
as
compute
configuration,
Network,
link,
capacity,
hop
distance
path,
latency
and
so
on.
The
table
in
this
slide
is
an
example
of
information
data
procured
at
node,
N1
I'll
at
Node
1
about
the
other
nodes,
and
once
the
nodes
are
discovered,
the
periodic
phase
is
now
used
to
identify
nodes
entering
or
leaving
the
discovery,
scope.
F
The
next
phase
is
a
synchronization
group
formation
phase.
We
Define
synchronization
group
denoted
by
SG
by
three
parameters,
namely
members
of
the
group
VI,
the
synchronization
interval
or
frequency
TI,
and
the
information
synchronized,
which
is
pi
one,
could
theoretically
group
every
node
in
the
world
into
a
single
synchronization
group
and
synchronize
it
on
extremely
fine
details
at
very
short
intervals.
However,
this
is
not
practical
due
to
the
network
overhead
and
sharing
huge
volume
of
data
and
the
processing
overhead
for
making
coordination
decisions.
F
At
the
same
time,
restricting
the
group
size
and
the
information
may
not
help
in
bringing
quantifiable
improvement
over
nfn.
Hence
we
are
employing
multiple
synchronization
groups
of
waiting,
size,
information
and
frequency.
The
figure
here
shows
three
different
synchronization
groups:
sg1
sg2
and
SG3.
Sj1
is
a
group
of
nodes.
One
hop
away,
sg2
groups
nodes
that
are
up
to
two
hops
away:
nsg3
groups
notes
that
are
three
hops
away,
and
so
on
the
size
of
the
group,
sg1
is
smaller
than
sg2
and
sg2
is
smaller
than
SG3,
meaning.
F
F
...so, as the size of the group increases, or as the distance from the node increases, the amount of information shared is reduced to just the crucial information, the crux of the information, and the synchronization frequency is also reduced. This helps in gaining information across a large scope while keeping the network overhead under control.
F: Okay, this slide shows the sequence of interest/data exchanges that happens in the group formation phase. The synchronization group formation proposal interest is initiated by every node, targeting the discovered nodes and inviting them to join the group. The nodes receiving the interest then respond with a follow-up interest requesting other information pertaining to the sync group, such as the other members of the group, the scope of synchronization, the periodicity of synchronization, and so on.
F
If
the
nodes
do
not
detect
any
redundancy
with
already
established
groups,
then
the
nodes
respond
with
an
acknowledgment
for
joining
the
group
and
on
discovering
new
notes
to
be
added
to
an
existing
group
or
removed
from
an
existing
group.
All
the
other
group
members
are
notified,
as
well
as
part
of
the
group
formation
phase.
F
At
the
end
of
group
formation,
where
all
the
nodes
have
been
created
at
the
nodes,
all
the
groups
have
been
created
at
the
nodes.
Each
node
performs
group
optimization
to
detect
and
avoid
any
relevant
synchronization
happening
within
these
nodes.
An
example
of
group,
optimization
scenario
is
shown
in
this
slide
with
a
Phi
node
Network,
so
the
Node
1
forms
two
groups,
one
group
with
nodes,
one
hop
away,
that
is
with
node
two
and
node
three
and
another
group
with
nodes,
two
hops
away
with
node,
4
and
note
5..
F
Similarly,
with
respect
to
Note
4,
it
also
forms
two
groups,
one
with
nodes
at
one
hop
distance
with
node,
two
three
and
five
and
another
with
nodes
at
two
hop
distance.
That
is
with
just
node
one
here.
The
two
hop
group
at
node
4
is
redundant,
as
all
members
of
the
group
are
already
a
subset
of
two
hop
group
from
Node
1
and
therefore
is
completely
removed.
F: Such group optimization is mainly focused on avoiding any additional network overhead from synchronization arising out of redundancy, and therefore it detects such scenarios.
F: The next phase is the synchronization phase. We adopt the application dataset synchronization protocol called State Vector Sync (SVS), which was originally developed for synchronizing distributed applications with several participants; an example of one such application is the NDN chat application. The SVS data model is based on vector clocks. On this slide we show SVS in action for the network shown here, with three nodes: users A, B, and C. The change updates at each node are mapped to a sequence number, and this sequence number is published as part of a change notification.
F: This is done with the help of a change notification, which is a sync interest initiated by user A. Its structure comprises the group prefix, followed by the state vector representing user A's knowledge of the states of all the users in the group. The sync interest is then multicast to all other members of the group. Each node also holds a local data store which records the history of changes of each member of the group.
F
The
slide
here
shows
the
data
store
at
user
B
and
C
by
comparing
the
state
Vector
in
the
change
notification
with
the
local
state
in
the
data
store
they
detect.
If
there
are
any
updates
in
the
other
members
in
the
group.
So
now
they
detect
that
there
is
an
update
in
user
a
that
they
are
not
aware
of.
F
Therefore,
in
order
to
fetch
each
change
update,
it
costs
approximately
three
message:
exchanges,
one
sync
interest:
one
fetch
interest
and
one
fetch
data.
The
synchronization
phase
can
be
invoked
either
periodically
where
the
nodes
publish
a
sync
interest
at
fixed
time
periods,
or
it
can
be
invoked
in
an
even
triggered
fashion,
where
each
change
is
an
event
triggering
synchronization.
F
In
daiser,
we
Implement
two
types
of
synchronization
groups
grouped
based
on
Hop
distance,
the
one
hop
synchronization
group
at
each
node
groups,
nodes
with
one
hop
away
while
three
hop
nodes,
scope,
extends
to
three
hops.
The
node
within
one
hop
group,
Share,
Fine,
general
information
of
State,
such
as
nodes
resource
utilization
functions,
instantiated
at
a
node
functions,
requested
with
node
list
of
unresolved
function,
requests
the
current
execution
queue
and
so
on.
F
Since
the
three
Hub
groups
are
larger
in
size,
the
synchronization
information
is
aggregated
and
each
node
shares
a
cumulative
knowledge
of
itself
with
its
one
hop
group
which
we
refer
to
as
a
Zone.
That
is,
each
node
shares
the
cumulative
knowledge
of
the
network.
Zone
it
belongs
to
for
and
examples
of,
those
could
be.
The
average
resource
utilization
of
a
Zone,
the
most
unresolved
function
or
most
popular
function
at
a
Zone
and
so
on.
F
The
synchronization
interval
for
one
hop
periodic
synchronization
is
closer
in
order
to
enable
the
neighboring
nodes
to
react
quickly
to
the
changes
in
the
neighborhood.
While
the
three
hop
synchronization
is
invoked
at
larger
intervals,
so
that
the
neighboring
zone
is
prepared
to
handle
the
gradual
changes
in
the
network
across
a
widen
scope,.
F
After
synchronization,
the
coordination
decision
making
phase
begins
where
each
node
Aggregates
all
the
information
collected
from
synchronization.
With
the
information
the
node
enforces
some
changes
locally.
That
alters
the
forwarding
behavior
of
underlying
and
FN
resolution
engine.
Each
node
identifies
whether
it
has
the
necessary
compute
resource
available
for
executing
function
requests.
If
the
node
can
spare
some
compute
resources,
then
it
identifies
any
busy
neighboring
nodes
which
is
congested
with
requests
and
the
list
of
unresolved
functions
at
such
nodes.
F: This list is then sorted in descending order, to ensure that the most popular unresolved function is resolved, or rather instantiated, first, and the free nodes fetch such functions and instantiate them, resulting in a load-balancing effect by taking over some of the compute load from the busy node. On the other hand, if the node is already facing a high compute load, it looks for any idle or less frequently requested functions that can be terminated; such functions are forwarded upstream, at the cost of increased network consumption and latency in responding to those requests.
F
Now
we
Implement
all
the
phases
of
the
iso
and
evaluate
them
using
ndn
Sim,
which
is
an
ns3
based
simulator.
We
implement
the
Inc
sum
module
which
simulates
the
behavior
of
NFL
resolution
engine
at
the
compute
nodes.
Daiser
is
an
application
running
alongside
nfn
taking
care
of
the
faces,
such
as
neighbor
node,
Discovery
group
formation,
optimization
synchronization
and
coordination.
The
network
setup
comprises
of
a
hierarchical
topology
of
compute
nodes
and
consumers.
The
simulation
parameters
that
we
use
is
presented
in
the
table
shown
in
the
slide.
F
Okay,
the
first
evaluation
metric
is
the
orchestration
map.
So
orchestration
map
shows
the
function
placement
at
each
node
in
the
network
topology.
We
also
implement
the
next
fit
decrease
in
heuristics,
which
sorts
the
function
in
descending
order
of
popularity
and
places
them
at
nodes
closest
to
consumers
in
progresses.
Upstream,
and
this
heuristic
does
not
replicate
instantiation
of
popular
functions
at
multiple
nodes.
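A plausible reading of that baseline in code; the capacities and sizes are invented to make it runnable, as the talk does not give them:

```python
def next_fit_decreasing(functions, nodes):
    """functions: (name, popularity, size); nodes: (name, capacity),
    ordered from the consumer edge upstream. Places each function once."""
    placement, idx, used = {}, 0, 0
    for name, _pop, size in sorted(functions, key=lambda f: f[1], reverse=True):
        while idx < len(nodes) and used + size > nodes[idx][1]:
            idx, used = idx + 1, 0   # node full: move to the next one upstream
        if idx == len(nodes):
            break                    # network out of capacity
        placement[name] = nodes[idx][0]
        used += size
    return placement

print(next_fit_decreasing([("f1", 9, 2), ("f2", 5, 2), ("f3", 1, 2)],
                          [("edge", 3), ("core", 4)]))
# {'f1': 'edge', 'f2': 'core', 'f3': 'core'}
```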
F
Now
we
compare
the
orchestration
map
generated
from
dicer
and
nfn
against
that
that
generated
by
nfid
and
on
the
x-axis
we
have
the
increasing
number
of
compute
nodes
in
the
network
and
on
the
y-axis.
We
have
the
number
of
functions
placed
at
these
nodes
by
the
different
solutions.
With
anything
the
number
of
functions.
Instantiated
is
less
compared
to
that
of
the
iso
and
and
nfid
in
an
imbalanced
float
scenario.
F
So
by
imbalanced
load
scenario,
what
I
mean
is
that
in
a
network
setup
half
the
nodes
face
High
compute
requests,
while
remaining
half
of
the
compute
nodes
are
pretty
much
idle.
In
this
case,
the
nodes
which
do
not
observe
any
compute
load
in
nfn
remain
idle
and
those
nodes
with
high
compute
load
are
congested
with
growing
execution
queue.
F
On
the
other
hand,
with
daiser.
Since
the
nodes
are
more
aware
of
the
network
state
in
the
zone,
it
identifies
and
instantiates
more
popular
function,
bringing
up
a
load
sharing
effect
Additionally.
The
daiser
coordination
algorithm
also
ensures
replication
of
those
popular
functions
with
a
high
number
of
requests
that
cannot
be
handled
by
just
a
single
instance
of
the
function
running
at
a
node,
so
daiser
also
takes
care
of
replicating
popular
functions.
F
We
also
evaluate
the
reduction
in
completion
time
with
the
iso,
so
completion
time
is
defined
as
a
round
trip
time
from
the
compute
request,
initiation
of
the
consumer
until
the
corresponding
data
packet
with
the
result
reaches
the
consumer,
we
tune
the
imbalance,
Factor
denoting
the
load
distribution
at
the
compute
nodes
from
a
balanced
load
where
all
the
compute
load
compute
nodes
in
the
network
face
equal
compute
requests
to
an
imbalanced
load
which
is
ibf
equal
to
one
where
the
load
is
just
on
50
of
the
nodes
on
the
x-axis,
the
number
of
unique
functions
and
the
setup
is
increased
and
on
the
y-axis
we
have
the
completion
time.
F
We
observe
that
in
a
balanced
load,
Indian
already
performs
well
with
identifying
and
instantiating
the
needed
functions
of
the
compute
nodes,
but
in
an
imbalanced
load
situation
where
half
the
nodes
are
idle
and
other
half
of
the
nodes
are
extremely
busy.
Tyson
brings
down
the
completion
Time
by
utilizing
the
compute
capacity
at
the
idle
nodes.
F
The
next
evaluation
metric
in
is
the
network
overhead
of
using
dicer
and
its
effect
on
the
scalability
of
the
solution.
So
in
this
graph
on
the
x-axis,
we
have
the
network
compute
scale
denoted
by
the
number
of
compute
nodes
and
on
the
y-axis
we
have
the
generated
Network
traffic
fundraiser
synchronization.
We
employ
different
synchronization
intervals,
starting
from
two
seconds
up
to
50
seconds,
and
we
see
that
the
network
overhead
shows
a
quadratic
relationship
with
the
network
scale,
as
well
as
the
synchronization
interval
group.
F: Group optimization already helps curb this overhead to a certain extent, but there are also other ways to reduce the control traffic: for instance, by employing event-triggered synchronization in a sparse network. One could also employ partial synchronization, as in one-directional synchronization, where nodes with a high centrality degree collect information from the surrounding nodes and enforce not local but zonal decisions, and the nodes which are not well connected do not get any updates from the other nodes; this reduces the synchronization overhead even further.
F: So this brings me to the conclusion of my presentation of DICer. Summarizing: we presented the default resolution strategy of NFN along with its limitations, followed by the four building blocks of the DICer concept; this was followed by a brief overview of the evaluation metrics and a performance comparison of DICer against NFN. As part of future work, we intend to enhance the coordination algorithm to operate with the objective of achieving joint optimization of compute and network resource utilization, as well as minimization of completion time.
F: Thank you, and I'll be happy to take up any questions.
A: Anyway, questions can be sent in the chat or sent to the list, and, well, we're on time, which is great. So the next one is Dirk, I think, if you are presenting. Did you have a question, Dirk, or are you just presenting? Okay, Dirk is presenting, and it's the use case update. Thank you.
G: Hi, sorry, sorry. If it's still here with my clicking the buttons, you can see... As I wanted to present, I wasn't entirely sure if this was the last version that was on the datatracker, and then I swapped the presenter icon; I did a double play, but I ended up doing the presentation, yeah. This is about the use cases for in-network computing. You'll notice, and I'll come to this later on at the end, that this is an expired draft...
G
I
already
mentioned
that,
so
you
can
find
in
the
archive
at
the
moment.
Only
yes,
so
the
purpose
of
the
draft
is
is
to
go
after
you
know
what
coin
Charter
talks
about
in
scope,
number
two
research
and
use
case
driven
requirements:
analysis
benefits
of
the
type
of
networks
of
these
networks
remain
functionality.
That's
all
text
is
written
in
the
charter.
What
we've
done
so
far
with
the
draft
was
to
collect
use
cases
and
recently,
in
the
most
recent
iterations,
we
worked
more
on
structuring
them
the
the
draft.
G
If
you
follow
the
draft
in
the
past
a
little
bit,
it
had
an
organic
growth.
It's
a
it's:
it's
a
emerging
of
several
activities
into
a
single
collection
of
use
cases,
but
the
structuring
that
was
looking
more
into
providing
insights
into
those
benefits.
Research,
question
opportunities
for
coin:
to
go
really
more
on
what
the
charter
was
asking
for
as
as
a
goal
we
we
started
with
when
it
was
adopted.
G: ...finally, as a research group draft. We are currently at version two, which is the one that expired, but in version one we already started regrouping the actual use cases. As a result of the adoption, we now have four major groups, which are the four main sections, looking at new COIN experiences, new COIN systems, improving existing COIN capabilities, and enabling new capabilities, and we elaborate in these sections a bit on what we mean by that.
G
We
also
try
to
sharpen
and
tighten
up
the
taxonomy
I'll
come
to
that
in
the
next
slide,
and
also
prepared
and
start
the
analysis
on
research,
questions
and
requirements.
I'll
come
to
that
as
well.
So
this
was,
there
was
quite
a
significant
change
when
we
actually
went
to
the
the
the
the
the
adopted
draft
level.
So
this
is
the
current
structure
and
that's
the
same
for
V1
and
V2.
We
just
added
more
more
cases,
nothing
really
that
did
change.
G
This
is
the
use
case
recruiting
that
I
mentioned
so
the
main
sections
three
four,
five
six
are
the
four
groups
I
mentioned
before
we
have
a
I'm
yet
again
got
another
use
case
so
Xavier
joined.
The
draft
was
a
new
use
case
in
Virtual
Network
programming.
That
was
another
one.
That's
the
latest
one
that
we
added.
We
also
had.
Is
it
up
there?
G
The
personalized
interacting
Performing
Arts
as
an
example
that
we
added
with
Miguel
and
David,
who
joined
I,
think
also
around
the
same
time,
so
my
HSA
joined
I
think
with
the
extended
reality.
So
that's
what
I
mean
it
was
kind
of
like
bringing
various
views
and
various
people
together
at
the
end
ended
up
on
the
on
the
author
list.
G
I
said
start
with
the
analysis
in
B2,
so
that's
the
latest
section
number
seven
and
we
also
try
to
in
the
meantime
to
partially
align
our
terminology
with
the
other
craft
that
that
my
children
mentioned
the
coaches
draft.
That
has
also
expired.
In
the
meantime,
the
terminology,
as
I
said
you
know,
we
introduced
language
and
try
to
also
adjust
all
of
the
use
cases
to
follow
the
same
language,
given
that
they've
all
that
they
have
had
grown
organically.
G
There
was
a
bit
of
a
language
mismatch
which
we
try
to
smoothen
out
a
little
bit,
so
we
introduced
these
type
of
terms
that
are
explained
in
the
draft
coin
program.
Coin
program
instance:
a
function
capability
Etc.
These
are
the
definitions
from
the
craft.
Excuse
me
and
the
green
ones
we
try
to
align
with
duck's
draft
and
we
also
introduced
a
few
new
ones.
The
coin
experience
the
current
capability
that
I
mentioned
was
in
the
recouping
of
the
end
of
equipping
off
of
the
use
cases
that
we
utilize.
G
Then
you
also
restructure
the
the
use
cases,
so
we
try
to
follow
the
same
kind
of
taxonomy
in
all
of
them
so
that
you
can
hopefully
read
them
in
a
very
similar
manner.
So
there's
a
link
to
the
category
in
the
description.
The
description
is
trying
in
a
relatively
closer
language,
to
outline
what
it
really
is
that
this
use
case
provides
as
an
experience
and
characterization.
G
What
are
existing
solutions
to
maybe
do
parts
of
these
use
cases
in
in
some
of
the
case
in
some
use
cases,
parts
or
almost
entirely
what
are
opportunities,
and
we
split
up
the
objectives
in
the
research
questions
in
in
one
of
the
latest
iterations,
and
then
we
also
added
requirements
for
the
account
capabilities
there,
and
this
is
said
all
try
to
pull
this
through.
G
Every
single
use
case
that
we
described
I
think
we've
done
a
decent
job,
at
least
to
follow
that
taxonomy.
But
there
are
still
cases
where
we
need
to
maybe
solution
it
a
little
bit
more.
We
started
as
I
mentioned
in
V2,
with
the
attempt
of
an
analysis
so
for
that
we
used
I
and
we
had
a
couple
of
separate
meetings.
G
We
were
in
the
authors
to
think
about
how
can
we
structure
research
questions
along
various
categories,
and
some
of
you
who
may
recall
material
are
presented
something
similar
in
the
other
entry
we
had
losing
a
bit
of
track
of
time.
I
think
that
was
in
spring
this
year.
Right
when
we
had
the
other
interview
on
how
could
you
look
at
the
various
things
that
we're
talking
about?
G
So
you
know
you
can
see
Visions
for
coin
at
the
bottom,
enabling
technology
is
just
doing
Computing
framework
applicability
areas
and-
and
we
adapted
this
kind
of
way
of
looking
at
coin,
in
order
to
structure
research
questions
along
those
different
categories
right.
The
requirement
analysis
is
something
we
intended
to.
We
have
requirements
in
the
actual
use
cases,
but
we
haven't
really
looked
at
them
coherently,
but
that's
the
part.
That's
still
missing,
that's
an
empty
section
at
the
moment.
G
So
the
next
steps
where
we
are
that
we
would
like
to
do
well
the
office
first
observation.
Coming
back
to
this,
the
draft
has
expired.
There
was
a
notice
recently
and
there
was
a
discussion
among
my
HSA
Ike
and
myself.
What
should
we
do
at
the
moment?
We
let
it
expire
in
order
to
present
this
to
the
working
group
in
order
to
come
to
the
questions
in
the
next
slide
right,
so
we
need,
we
would
need
to
resubmit
a
new
version.
That's
the
first
step.
G
There
is
work
in
the
draft
that
could
be
done
finishing
to
align
the
use
cases.
I
said
there
are
still
some
bits
and
pieces
and
Corners
where
we
need
to
smoothen
the
taxonomy
a
bit.
It's
a
huge
task
that
needs
to
be
done,
aligning
the
draft
with
the
terminology
that
we
try
to
introduce
and
also
obviously
also
agreeing
on
the
terminology
if
possible.
G
The
analysis
part,
you
know
condense
the
opportunities
we
just
question
all
this
kind
of
things,
but
it
brings
me
to
the
questions
in
the
first
place,
so
there's
work
that
could
be
done.
If
you
wanted
to
the
questions,
though,
I
have
to
to
to
to
the
community,
the
working
group
is
well
first,
do
we
want
to
collect
a
terminology,
that's
emerging
from
this
draft
and
also
trying
you
know
aligning
with
the
coaches
draft?
G
Where
do
we
want
to
collect?
That?
Is
that
a
good
place
to
leave
it
in
this
draft,
or
do
we
want
to
have
this
somewhere
separate?
That's
the
first
question
we
also
looked
into.
Should
the
analysis
really
be
part
of
this
document?
We
started
with
the
analysis
because
we
felt
that
there's
material
in
there
that
could
really
help
us
to
make
sense
of
the
discussion
or
do
we
want
to
maybe
finish
this
document
and
do
the
analysis
in
a
more
thorough
manner,
maybe
in
a
different
document
right.
G
We
could
keep
this
purely
at
the
use
case
level
and
and
leave
the
analysis
for
a
separate
document.
That's
another
question
to
decide
on,
but
I
think
the
most
crucial
one
really
is
given
that
this
draft
has
expired.
Do
we
want
to
see
this
work?
Continued
I
mean
the
drafts
are
available
they're
in
the
archives.
If
you
want
to
read
them,
but
do
we
want
to
have
the
work
continue?
Do
we
want
to
have
a
last
call
for
publication,
considering
the
editorial
questions
that
are
asked
around
the
analysis?
G
Do
we
want
to
continue
and
then
obviously
the
question
inevitably
is?
Are
there
any
other
contributors
that
would
help
us
pushing
the
work
forward
and
with
that
I
think
there's
a
question
for
my
HSA,
it's
one
of
the
co-authors
or
as
a
chair,
I,
don't
know.
Thank
you
very
much.
A
There's
more
as
a
chair
than
I
think
it's
relates
to
your
questions
and,
of
course
we
could
move
a
lot
of
that
to
the
list,
but
I
think
collecting
the
coronary
terminology
is
a
good
idea.
Maybe
we
want
it
would
be,
maybe
better
to
put
it
in
a
small
draft
and
making
it
on
its
own
in
a
lot
of
groups
have
ontology
drafts
and
our
and
rfcs
so
I
think
that
would
be
anyway.
What
I
think
and
others
can
disagree?
A
I
would
say
the
analysis
I
agree
with
you
that
it
maybe
should
be
out
and
I
think
it's
important
to
have
the
work
continued
and-
and
this
is
me
as
a
chair
now
it
I
I,
I
I,
know
I'm.
Also
an
I
co-authored
but
I
think
it's
it's
an
important
topic
because
of
how
Dynamic
the
the
the
field
is
and
the
the
use
case
that
are
in
there
have
like
some
kind
of
also
the
historical
nature.
You
know
we
started
this
a
very
long
time
ago
and
I
think
it.
A: ...it evolves with the field, and I think it would give some kind of a good overview of how this started and where it's going. So yeah, this is me as the chair; as a co-author, obviously, I'm open to continue helping with this. So those are my comments. Any other people?
A: So, thank you. Thank you, Dirk, and thank you everybody for presenting this. And I think I'm lost in the agenda, but...
A: ...the next presentation, oh, is a new draft from Jiao Li from the Beijing University of Posts and Telecommunications, and it's also about some ML-related work. So, Jiao, do you want to present?
A: They should be there.
H: And this is my motivation: with the development of 5G technology and the popularization of IoT, the data generated by mobile terminals and IoT devices keeps growing. When facing the training requirements of artificial intelligence models, edge computing and cloud computing each have their own shortcomings, so a distributed model training architecture based on edge-cloud collaboration has become a feasible scheme for artificial intelligence model training.
H
The
model
training
Supply,
as
shown
the
figure
eight
since
H,
since
each
layer
of
an
artificial
intelligence
model
have
a
Independence
input
and
outputs
a
model
can
be
studying
to
multiply
several
models
for
Independence
training,
where
the
training
layer
that
links
the
models
is
called
segmentation
layers.
The
this
mask
can
provide
the
besides
for
age,
age,
Cloud,
collaborative
collaborative
training,
so
the
on
the
training
Pro
cesses
have
the
the
entire
training
process
is
showing
you
figure
beyond
the
figure
saying
balance
as
it
seems
some
mistake.
H
So,
let's
add
your
no
stands
in
a
mode
of
training
request
to
the
cloud
nodes
after
the
cloud
nodes
received,
all
training
requests
from
the
I've
noticed
it
prepare
for
more
more
training,
which
is
divided
into
the
standardization,
and
the
model
did
main
nation
in
the
most
domination.
The
cloud
nodes.
H
That
means
a
model
architecture
according
to
the
training
tasks
on
the
standards
to
our
age
notes
and
in
order
to
reduce
the
amount
of
of
the
Computing
in
the
training
process,
decided
need
to
be
standard
reasons
before
training
the
standard
Edition
Master
Edition.
They
they
turned
by
the
cloud
node
of
the
preparation
of
is
complete
in
terms
of
monitoring
stage.
H
H
H: At the same time, it is also possible to negotiate the size of the data at the segmentation layer and reserve bandwidth for the transmission in advance. After an edge node finishes the forward pass of its prepared sub-model, it sends the segmentation-layer output of the sub-model to the cloud node; after receiving the segmentation-layer output, the cloud node completes the remaining training of the model and sends back the model weights.
H
So
far,
a
random
model
training
is
is
complete
and
after
each
thousand
of
training,
the
cloud
the
cloud
node
the
current
to
generate
a
global
globe
model
so
through
the
distribute
learning
such
as
the
on
the
from
the
rotate
learnings
and
the
age
node
continuing
to
train
them
to
train
according
to
the
color
mode
under
the
model,
Acres
meet
the
requirements.
H
And
there
is
some
smooth
simulation
requires,
a
strong.
Our
our
architecture
can
can
improve
the
curioso
models
and
thus
reduce
the
reduce
the
computing
pressure
of
its
nose
and
the
improves
the
query
of
Nano
service.
Thank
you.
A: What do you want to do with this draft? Do you want to continue it? Yeah, what are your goals with the draft moving on now?
H: I want to provide a compute-balanced model for model training in networks, because, in order to do the training: on one side, cloud computing needs a large bandwidth and has a high energy consumption, while on the other side, edge computing doesn't have much computing power and can't satisfy the model training requests. I want to find the balance.
A: Since this is a new draft, any other questions? One thing would be to maybe start a discussion on the list about how this could evolve within the group. So yeah, let's do that.
A: The next presentation is on new ideas. This first one is on data operations in-network, from Huawei.
E: Okay, okay, thank you. Hello, everyone, I'm Yiting from Huawei. I'm very delighted to share my thoughts about our solution, data operations in-network (DON), and today maybe I will pay more attention to discussing some...
E
Your
stress
and
I
will
introduce
introduce
what
scenarios
we
think
don't
Solutions
a
traditional
way
and
maybe
I
will
I
want
to
discuss
the
in
network
computer
in
another
adapter
from
the
network
particle,
and
maybe
we
want
to
talk
about
how
some
in
network
computing
work
can
work
in
a
real
nice
work,
light
Data,
Center,
okays
and
okay,
nice
page
as
a
ISO.
First,
maybe
I
will
talk
more
about
our
motivation.
E
We
know
that
the
recent
Recent
research
has
shown
that
the
network
device
undertake
some
Computing
tests
can
greatly
improve
the
overall
Network
and
the
application
performance
in
some
scenarios
like
we
talked
about
lots
of
first
presentation
we
talked
about
today,
so
we
think
that
the
door
research
should
pay
more
attention
to
some
scenarios,
while
the
data
operations
are
required
to
be
done
at
a
synchronized
node.
Well,
the
operation
is
simple
enough
to
be
done
at
the
at
the
line.
Speed.
That
means
the
wasting
the
networks,
the
network
device
build.
E
Okay,
it's
a
nice
page,
the
first.
The
first
scenario
is
about
artificial
intelligence
scenarios
and
within
interest
scenarios.
So
first
presentation
we
talked
about
a
lot
within
the
increased.
The
number
of
surveys
does
not
lead
to
a
Learner
in
linear
increase
in
a
service
performance,
and
we
find
many
ways
to
Solutions.
This
question
for
one
day
is
where
our
parameter
Center
and
for
another
way
is.
We
have
already
use
solution,
part
of
all
the
night
radios
or
some
Computing
in
the
network
solution.
E
We
know
that
the
way
to
do
some
aggregation
tests
and
we
know
that
the
switch
in
the
center
topology
and
the
switch
will
aggregations
message
from
the
from
the
distributed
notes
and
in
this
patients,
which
will
aggregation
the
information
from
the
form
machine
and
the
reason
that
this
aggregation
so
don't
think
that
the
aggregation
may
be
a
basic
and
simple,
simple
operation
like
way
kind
of
strategies,
operations
like
some
arbitration,
and
if
we
want
the
dancing
the
package
from
the
zero
machine
to
the
switch.
The
package
will
tell
the
switch.
E
The
switch
should
do
the
sum
operation
and
so
and
they
will
have
a
standard
practical
to
tell
the
how
the
machine
packages
is
in
Computing.
The
Computing
information
and
also
don't
think
likely.
You
know,
data
center
is:
has
the
data
center?
Has
a
more
public
network
topology
and
not
like
some?
Some,
some
simple
topology
only
have
one
computers
switch
under
the
dawn
within
the
way
should
have
a
solution
to
rooting.
E
For
example,
we
transcend
you
know
the
house
will
send
a
loud
message
to
the
net
loss
weight.
The
net
not
switch,
can
repeat
the
host
whether
they
can
get
this
lot.
Maybe
it's
is
a
basically
and
a
simple
operation,
so
we're
saying
so
don't
think
the
net
notes,
which
can
do
this
and
will
have
that
benefit
from
this
solution
and
in
the
dawn,
will
also
sent
away
through
the
support,
the
routing
solution
or
other
operation
and
Computing
information
package.
Wait.
E
Okay,
so
tonight
is
the
secret
sequence,
and
the
second
is
a
scenarios
that
the
package,
the
different
package,
should
have
showed
how
a
message
to
to
decide
which
message
is
early
to
reach
the
to
the
server
and
in
the
traditional
way
we
use
the
global
transaction
manager.
But
in
the
dawn
we
think
this
operation.
We
can't
say
that
it's
like
a
fashioned
at
operation
and
wasting.
We
can
get
benefits
and
the.
E
In
the
summaries
in
my
Southwestern,
the
door
Network
cannot
support
very
complete
computer
operation,
the
wasting
some
basic
spaces
application
return
of
abstract
to
some
basic
basic
and
simple
operation
and
the
way
since
this
operation,
we
shouldn't
can't
affect
the
forwarding
performance
of
the
data
plan,
because
if
we
advise
the
reporting
performance
within
this
will
be,
there
will
be
less
useful
and
within
who
our
don't.
Can
there
always
some
bottleneck
of
the
computer
operation
so
in
the
dance,
dance
Southwest?
E
And
we
should
have
a
general
mechanism
to
the
Oasis
question
and
we
should
repair
to
tell
the
switch
what
to
do
and
what's
the
package,
how
the
package
routing
to
the
right
switch?
E: Okay, this is my thinking about DON, and, as in the use cases we introduced before, we abstract some basic operations, like compare-and-swap, fetch-and-add, and so on; a sketch of how a packet might carry such an operation follows.
E: Okay, the next page: this is what we think the DON devices can do, and what we consider in the DON solution. First, the DON network can route the computing packet to the right computing device; the network may have a lot of DON devices, but we should route to the right device. For example, for the aggregation task, the four machines should negotiate beforehand which switch will do the aggregation, so maybe the solution addresses this first question. And another question: we think the DON operation should tell the device what to do, like an instruction set, a lightweight one with some basic operations.
E: Okay, that's all. For the next step, we will work on a solution, maybe for the data plane, and we want to find a general way to carry computing information in the network. And maybe we will consider more how to make DON work with related systems, like the data center and others, and we hope people will join us. Thank you.
A: Thank you very much for this interesting presentation. So what do you intend to do about this work in this group? Do you want to write a document? Do you want to continue doing this strictly as a research initiative that you report on to the group, and that eventually you...?
E: Oh, okay, maybe now I plan to have a draft in the IETF, because, so...
A: Well, yeah, but you know, the thing will be for you to find the right working group, and obviously this is not something that we do in this group. But if you want to continue, at least keeping us informed of the work would be nice, and of course you can do the work also in the IETF.
E: I hope others will join us, because our work is just a start.
C: About the name: there's some discussion on terminology, and in fact this is a terminology issue. You name it data operation in-network, but your three use cases seem to be very much computing, more computing than data operation; for example, the compare-and-swap (CAS), and fetch-and-add as well.
C: Those are typical computations; sometimes they are called atomic computations in the context of MPI or RDMA, I forget which, but that's typical computation. So that's more computation than data operation. I was just saying; that's a quick comment on the name, I mean.
E: Yes, yes. In my thoughts, what DON considers is that the applications change very fast, and we can't deal with the communication case by case; it's not a good way to make our research work in some data center. So maybe I think stateless operations may be a good way. Thank you.
B: Yeah, hi. So, just following up on Jeffrey's comment, and with no hats: I was a little surprised that the operations were such low level, given the more high-level use cases, and it wasn't clear if this was a computation-in-the-network signaling scheme or an active networking scheme that was being proposed; it would be good to be clear on the distinction of where this is going.
I: Thank you, I would like to reply. Yeah, it seems there's some confusion here, right. Data operation, generally, in my view, means that the data is carried in the payload: we provide the data, we provide an operation, and we want the switch to operate on the data following the packet, right. That's why this work is called data operation.
I
The data is provided, and we have some simple instructions, and we let the switch do the work. And also, the reason why all the operations are very low level is because we believe that, in these scenarios, keeping the task at line rate is very important. The combinations of various different operations, or atomic computations, could take a much longer time and slow down the whole forwarding. Okay, and that's why these are very low-level operations. Thank you.
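As an illustration of this "data follows the instruction" point, here is a hedged Python sketch of a packet whose payload carries an opcode plus operand data, with a toy switch-side handler; the field layout, opcode values, and names are assumptions for illustration, not the wire format proposed in the talk.

```python
# Illustrative only: a "data operation" packet carrying both a low-level
# instruction and the operand data it applies to, so the switch operates
# on the data that follows the opcode at forwarding time.
import struct

OP_CAS, OP_ADD = 1, 2                     # hypothetical opcode values
HDR = struct.Struct("!BI")                # opcode (1 byte) + register id (4 bytes)

def make_packet(opcode: int, reg: int, *operands: int) -> bytes:
    return HDR.pack(opcode, reg) + struct.pack(f"!{len(operands)}Q", *operands)

def switch_execute(pkt: bytes, regs: list) -> int:
    """Toy per-packet execution of one low-level operation on switch state."""
    opcode, reg = HDR.unpack_from(pkt)
    body = pkt[HDR.size:]
    if opcode == OP_CAS:
        expected, new = struct.unpack("!2Q", body)
        old = regs[reg]
        if old == expected:
            regs[reg] = new
        return old
    if opcode == OP_ADD:                  # e.g. one in-network aggregation step
        (value,) = struct.unpack("!Q", body)
        regs[reg] += value
        return regs[reg]
    raise ValueError("unknown opcode")

registers = [0] * 8
switch_execute(make_packet(OP_ADD, 3, 40), registers)
switch_execute(make_packet(OP_ADD, 3, 2), registers)
print(registers[3])                       # 42
```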
C
Yeah, I understand your point. From an in-network computing point of view, you always need some data for the computing, and those data are usually also conveyed by some packets; otherwise, how can we do in-network computing? So honestly, I don't see a lot of difference between these use cases and in-network computing.
I
Right, well, the computing always needs some data. Well, the difference is whether you fetch the data from somewhere else, or the data comes with the instructions, right?
A
Yeah, I was thinking maybe it's worth taking this offline, or taking it to the list again, because this is kind of new work. It would maybe be interesting to bring it into the list next. So thank you very much again, Eugene, and so Stefano is now going to present us another machine learning for networking topic.
A
Actually, that seems to be a theme today, and actually, knowing what I do in the rest of my life, I think it's a theme for many people. So Stefano, please, I will grant you the screen.
J
Thank you. Thank you, so welcome everybody. I'll present this update on the machine learning for networking use case for EIP, Extensible In-band Processing. This is a work jointly done by…
J
And some colleagues of mine at the University of Rome Tor Vergata, then Swamy at Stanford University, and Muhammad Shahbaz at Purdue University.
J
So in a previous presentation here, Tushar and Shahbaz had presented, sorry, yeah, presented a solution for per-packet machine learning inference using Taurus. So in this work we will extend the Taurus solution, which is based on a single node, to a distributed architecture in which we have the feature extraction separated from the machine learning inference process.
J
We chose this distributed architecture because we want to keep the idea of having per-packet machine learning inference: we want, for example, to detect anomalies in a very small time frame, a small time window. And then we thought to use the EIP mechanism to transmit the encoded features from one node to another. In a very short introductory slide, I will just recall the Taurus architecture for machine learning. There are several applications of machine learning in networking, like anomaly detection, traffic classification, or congestion control.
J
And most of these applications really do well if they are applied on a packet-by-packet basis, but usually it's very hard, because of the processing requirements, to do machine learning inference on a packet-by-packet basis. This is why the Taurus solution has been proposed, and the Taurus solution is a switch pipeline which includes a machine learning inference engine.
J
So the idea is kind of an extension of programmable switches, like the Tofino architecture, in which you have, you see, the normal packet parsing and the processing of packets based on match-action tables; in addition, in the Taurus switch pipeline there is a machine learning inference engine that can take, for example, classification decisions based on the features that are extracted by the previous stages of packet processing. And what has been shown in the Taurus paper is that this can work at line rate.
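As a rough illustration of the stage ordering just described (parser, then match-action tables, then the inference engine), here is a toy Python sketch of the data flow; it is only a software analogy under assumed names, not the actual Taurus hardware design.

```python
# Toy sketch of the described pipeline ordering:
# parse -> match-action (feature update) -> ML inference per packet.
def parse(pkt: dict) -> dict:
    return {"flow": (pkt["src"], pkt["dst"]), "len": pkt["len"]}

def match_action(meta: dict, features: dict) -> dict:
    # Earlier stages extract/update state, e.g. a per-flow byte count.
    features[meta["flow"]] = features.get(meta["flow"], 0) + meta["len"]
    meta["flow_bytes"] = features[meta["flow"]]
    return meta

def ml_inference(meta: dict) -> str:
    # Stand-in for the inference engine: a trivial threshold "model"
    # deciding a class from the extracted feature.
    return "anomalous" if meta["flow_bytes"] > 10_000 else "benign"

features: dict = {}
for pkt in ({"src": "a", "dst": "b", "len": 9000},
            {"src": "a", "dst": "b", "len": 4000}):
    print(ml_inference(match_action(parse(pkt), features)))  # benign, anomalous
```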
J
So it is really possible to apply classification at line rate with this switch architecture. In the paper they show a single-node model, in which you have a single switch, for example, which receives the packets at line rate. These packets are parsed and pre-processed, because usually you cannot run the inference based only on the information that is contained in a single packet.
J
You have, for example, to do something like collecting per-flow features: you should count how many packets of the same flow have been received in the last 10 seconds. You need to do this type of processing, which we call feature extraction. Okay, so in the single-node model, the node performs the feature extraction using a traditional architecture, like a basic Tofino architecture. Then these features are extracted and handed over to the machine learning inference engine, which can perform the machine learning inference at line rate.
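A minimal sketch of this kind of stateful feature extraction, assuming a 10-second sliding window and a flow keyed by a 5-tuple-like identifier; the class and field names here are illustrative, not from the Taurus work.

```python
# Sketch: per-flow feature extraction over a sliding time window,
# producing the (packets, bytes) features handed to the inference engine.
from collections import defaultdict, deque
import time

WINDOW = 10.0  # seconds

class FlowFeatures:
    def __init__(self):
        self.events = defaultdict(deque)   # flow id -> deque of (ts, size)

    def on_packet(self, flow_id, size, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[flow_id]
        q.append((now, size))
        while q and now - q[0][0] > WINDOW:   # evict packets outside the window
            q.popleft()
        return len(q), sum(sz for _, sz in q)  # (pkts, bytes) in last WINDOW s

fx = FlowFeatures()
flow = ("10.0.0.1", "10.0.0.2", 6)
print(fx.on_packet(flow, 1500, now=0.0))   # (1, 1500)
print(fx.on_packet(flow, 400,  now=5.0))   # (2, 1900)
print(fx.on_packet(flow, 60,   now=12.0))  # (2, 460) -- first packet expired
```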
J
So in this work that I'm reporting, we want to generalize this idea, and we think that in a network context we can take this approach with distributed feature extraction and machine learning inference. There can be several scenarios in which it can be useful to extract some features in one node, then encode the features and transmit them to another node that can perform the machine learning inference. A typical example can be the following.
J
You may have a data center in which you have an aggregation switch, and this switch is receiving traffic from thousands of servers, and each server has maybe tens of virtual machines inside. It is not possible for this node to perform the feature extraction for all these flows in parallel, because this requires a lot of state information to do the extraction of features for each single flow. So there is this scenario, in which it is very useful to have, for example, every single server extract the features.
J
Then you need to transmit the features to the switch, because in the switch you have the hardware that is capable of running the ML inference at line speed. Okay. So this is a more general, generic vision: we will have some nodes that are capable of extracting features; they will transmit these features to other nodes, which could typically be switches. These switches are capable of running machine learning, but they are also capable of extracting features themselves.
J
So there could be two layers, two levels, let's say, of feature extraction before using the machine learning inference, or even the features can be evaluated by one node and sent to another node for further processing. So this is a rather general model that we are proposing.
J
So now there is the problem of how we can encode and transmit these extracted features from one node to another, and here the solution that I have presented in a previous presentation, which is called Extensible In-band Processing, comes into play. I already presented that with Extensible In-band Processing we want to put information in the IPv6 header using, for example, the Hop-by-Hop option, and this is a generic container for the several use cases that I mentioned in my previous presentation.
J
So this encoding of features for machine learning can just be seen as a new use case for the proposed EIP mechanism, and in particular we are defining a record that we call the Encoded Feature Representation, or EFR, record, okay, which can be transmitted as an information element inside this proposed EIP option. So basically, we have a framework which is generic.
J
Then we are making some considerations: the encoding of these features needs to be very, very efficient, because we are adding information in the data plane, so we prefer to have a representation in a record which is just a plain array of bytes, with no explicit tagging of the features.
J
And so, of course, we need to find an agreement between the sender and the receiver of the information, to specify what is the content of the features that are transmitted, because there can be different applications, different anomaly detection scenarios, and so the set of features to be transmitted can change. And so we are defining this.
J
It is a simple solution: there will be some identifiers agreed between the sender and the receiver, and this identifier will specify the structure of the record that is transmitted. So, for example, if you have a given identifier, then we are transmitting these features: number of packets per flow, number of bytes, and so on, for each of the features.
J
Of course, we know what is the length of the information that is encoded in this record. So we can discuss how to propose a standard, or rather a framework; we don't really need to go with a standard in the complete meaning.
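To illustrate the described record layout, an identifier followed by a plain, untagged array of bytes whose structure both ends have agreed on, here is a hedged Python sketch; the identifier values, field choices, and encodings are assumptions for illustration, not the EFR format from the EIP draft.

```python
# Sketch of the EFR idea: record = one agreed identifier + untagged bytes.
# The identifier (agreed out of band) selects the layout; no per-feature tags.
import struct

# Hypothetical agreed schemas: EFR id -> (binary layout, field names)
EFR_SCHEMAS = {
    1: (struct.Struct("!IQ"),  ("pkts_per_flow", "bytes_per_flow")),
    2: (struct.Struct("!IQH"), ("pkts_per_flow", "bytes_per_flow", "mean_pkt_len")),
}

def encode_efr(efr_id: int, **features) -> bytes:
    layout, names = EFR_SCHEMAS[efr_id]
    return bytes([efr_id]) + layout.pack(*(features[n] for n in names))

def decode_efr(record: bytes) -> dict:
    efr_id = record[0]
    layout, names = EFR_SCHEMAS[efr_id]          # receiver knows the layout
    return dict(zip(names, layout.unpack(record[1:1 + layout.size])))

rec = encode_efr(2, pkts_per_flow=2, bytes_per_flow=1900, mean_pkt_len=950)
print(len(rec), decode_efr(rec))   # 15 bytes: compact, no per-feature tagging
```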
J
We believe that we do not need to standardize the content of this record, so the different applications will be free to choose which features need to be transmitted; we want to leave the innovation open, of course, in this exchange of features. But maybe it can be useful to standardize, to use a common framework to exchange these EFR records among the nodes, and in particular we think that this could be a good use case for the EIP option: to define an information element that basically just includes an identifier of the…
J
Okay, so this is very early; this is an update, and it just includes very early ideas, and this idea of distributing the feature extraction and the machine learning inference opens up several research issues. So we are just starting this activity, and so we welcome comments and discussion on these ideas.
A
I…
J
Yes, I would like to… we are now working on a position paper describing these things, and then I will extend the EIP draft. We have some drafts on EIP, and I will include the description of this use case, and I'll be pleased to submit these updates, when they are ready, on the mailing list of COIN and receive comments. And I…
J
I think that this is an example of the feature extraction; in particular, it is an example of processing that is done inside the network, by the network nodes, to extract these features. So I believe it's relevant to the activity of this group, but first I'm open to comments about it. Yeah.
A
Okay, Jeffrey.
C
Yes, so first, I think that this is a quite interesting topic, because Stefano is talking about how AI can be used for the network, all right. The previous several presentations, or talks, were discussing the network for AI. But I have a question for you, Stefano. So you try to separate the feature extraction and the machine learning inference. My understanding about deep learning is that usually it integrates the representation learning inside, so the feature extraction is in some hidden states inside the machine learning model.
C
So do you have some specific application, specific use case, or solutions, where we can separate the feature extraction?
J
Yeah, this is a very, very, very interesting question. Yeah, I think that the use case has specific characteristics that may make it different from traditional machine learning, because maybe in traditional machine learning, as you say, you have a full data set and you want to automatically discover the features.
J
You do not want to pre-choose your features. But we have the impression that this will not scale, that it's very difficult, based just on the raw data, and the raw data are just the packets that are flowing in a node, to then be able, in a scalable way, to do good machine learning inference. And the existing solutions for machine learning for networking are already based, for example, on flow-level features; they are not analyzing…
J
…inspecting the single packet. So I think that there is already this trend that you need to extract the features before you can run a machine learning inference. But I agree, it's a very interesting question and a very interesting issue, because if you choose the wrong features, then you lose the possibility of making a good algorithm. So I agree that is an open research issue; it's one of the open research issues that I mentioned at the end of the presentation.
A
I will declare that we're on time and that we can essentially close the meeting. You probably have seen that we requested a slot in London; I don't know who is going to be there in person. I know that I myself won't be there, but we'll see, you know. Hopefully, people will be able to meet in person. I will still be virtual for a while. And so, we will produce some notes.
A
I took notes on the side; I will coordinate with Jeff about how we do that. And thank you very much to all the presenters. Thank you especially to the people in Jeffrey's time zone, where it's currently extremely late. Thank you so very much, everybody, I really appreciate it; Jeffrey, I appreciate it. And again, Eve sends her best regards; she couldn't join us today, but hopefully she'll be there, at least virtually, in London. Thank you very much.