From YouTube: COIN Interim Meeting, 2020-04-07
A
Again, be aware that this meeting is being recorded. I'm recording it in the cloud, and if anybody is interested in the recording, we will make it available. And so, anyway, a reminder that we should have a warm, fuzzy feeling about participating in this: we're an open community and we want everybody to feel that they have a voice and that they are going to be heard as much as anybody else.
A
We obviously have all the meeting materials. I think we have most of them; I think we're only missing one presentation, so everything's there. There's an Etherpad for people who are on the call right now; maybe even send a message on the chat, and record your name in the Etherpad, because we're going to use that for the blue sheets. For the moment my XMPP client doesn't work, but if people want to use Jabber, that's fine, and all sessions are recorded. Take your video off, because first it's very heavy on the bandwidth and second, it's a bit distracting.
A
Keep yourself muted, okay? For the mic queue, there are commands to add yourself to and remove yourself from the queue. I will tell you that I've been in two or three meetings where we've had this: it works okay, but I think it would need a better queueing mechanism. That's fine. I was thinking of using Twitter, but we can do with this. That's fine, that's fine! And so, again, please use the Etherpad to record your name.
A
So I love the comment from Eve, who actually added the red note here. Yes, we should make this more exciting. Maybe it's a goal for the next IETF, or for the next interim, to find something that's more exciting than just saying we foster research on computing in the network. Maybe we want to do something better, but for the moment that's what we do. I think with all the development that's happening in this field, though, I agree with Eve that we could make it a little bit more exciting.
A
I think exciting is a good thing, but yeah, I think maybe it's a bit too vanilla. You know, we have to remember that we wrote this when we were just a proposed research group, and now there's more buy-in. But anyway, our goal right now is fostering research on computing in the network, and our focus is pretty wide.
A
Thank you very much. So obviously we're halfway through this agenda item zero. Then we have a number of research presentations; two actually come from a conference I attended at the end of February, which right now seems a long time ago. One is about some recent work that I've been doing with Ericsson and Princeton, with the help of Eve, and one on things related to the One Data Model, because a lot of that is related to having computing in networks.
A
Then we have our draft updates, which are the drafts that are active right now. So we have the microservices one, the security part, some requirements from China Mobile, and an update again on discovery, again related to this idea of putting computing in the network and finding where the data or, eventually, where all the resources are.
A
We're going to conclude with some research group items we've got to prep for Madrid. I guess you guys saw, I don't know if you saw, but there is a message going around that, with what's happening, there is a contingency plan if we do not go to Spain in the summer, so we could actually have something, maybe virtually, in June. And then open discussion, if anybody else has items.
A
Just two added notes: I just wanted to give a small update, because it was very related to this. There was a conference, like I said, five or six weeks ago in Paris, and part of it was a workshop basically about P4 applications and services, and two papers from that conference we invited to this meeting.
A
There were sideline discussions on P4 and the Tofino architecture, because we had somebody from Intel Barefoot, and it started this discussion about how we might want to do filtering and processing of packets at the edge beyond just what is done on the header, and also whether we could do multi-stream, or some kind of multi-threading, which is something that Eve had talked about. So there was that, and there was a tutorial.
A
There were questions, though, because when we're talking right now, a lot of times we're talking about IoT and a lot of edge networking, but I think we have to be careful to say that we are looking at the whole network. We're looking at functions at the edge, in the cloud, and in between. There was one dissenting voice, a chief engineer somewhere in Germany, I think, on locating the computing in the switch fabric; for him, the rest was not "in the network."
A
I think he was the only one, but he raised a good aspect, and the reason I put it there is that he raised the aspect of adding computing inside the actual code that is running switches. That is interesting, because it was not about adding another switch: he was trying to do things in the current switch fabric and in the current switch code, and that research is interesting.
A
Obviously, the whole point, and that's why I said that maybe in our goals we should look at the impact, is that this was seen as a major emerging trend in the network, and because of that, next year's conference is going to be about this melding of networking and computing; it's the theme of the whole thing. So this should be very interesting, and I've started thinking. I know that Melinda Shore...
A
She has raised this idea of maybe co-locating interim meetings with conferences, and because of the timing of this one, we could actually think of maybe hosting a face-to-face, once we actually have face-to-face meetings again, collocated with ICIN next year. But you know, this is really far in the future. That could be one thing, along with having a combination of face-to-face and virtual, like we've done for network coding, for example, with meetings at MIT and everywhere. So that's it.
C
So, the last one is the newest one; it was added after the Singapore meeting, about security and privacy. I think this is good to address the questions raised during the IAB review meeting, right. So maybe five of these drafts will be introduced today. Next page, please; next slide. These two have in fact expired, and mine is addressing the use cases in extended reality, and the vision.
B
Okay. Being cautious, or at least observant, of the time, and that we don't have that much of it, I will quickly say that we're pretty much right on the milestones that we created when we were first launched, or even before. And as you can see from the internet drafts, we've got a pretty healthy set of topics that meet the milestones. If you go to our website, you can sort of see how we've clustered the Internet drafts under the milestones.
B
But the point is that we focused on landscapes and challenges, directions and requirements, where many of these things have been addressed sort of indirectly, and so some of what we would like to accomplish in the next round is to really hone the milestones. I think that's what the question marks relate to.
B
So the big work ahead, as you can see from the question marks, is really: now that we've been blessed as a research group, how do we appropriately scope this and make some headway towards that goal? I think we should launch into our presentations at this point, because I think we're already off our schedule. So why don't we do that without further ado.
D
Can you hear me? Yes? Okay, thank you. So, let's first take a look at the scenario we're dealing with. In the last years, CPUs have struggled to improve their performance because of the end of Moore's law and Dennard scaling, but still we need acceleration for so many different tasks. So the new architectural approach we try to follow is that of so-called domain-specific architectures: architectures that, rather than being general-purpose CPUs, are tailored to a specific domain of applications; they're programmable, and they can be power efficient.
D
Compared to general-purpose CPUs, as you can see, this vision works. For example, we get four orders of magnitude improvement in throughput by putting a Paxos consensus protocol in the network, and we get a five-times gain in power consumption by offloading network functions to dedicated hardware. So this work is about the opportunity to offload MapReduce kinds of tasks to stateful data planes.
D
We tried to find the common requirements for these kinds of tasks, and found that these kinds of data planes can achieve low-latency and low-congestion processing. We first validated our approach through a first use case. So first, some background: MapReduce is a programming model which was proposed by Google back in 2004.
D
Basically, newer programming models in the field of data stream processing are no more than a superset of the MapReduce programming model. The map phase basically applies a processor to a generic input, generating intermediate key-value pairs, which are then sent to the reduce phase. We have multiple map instances, each receiving a split of the incoming data, while the reduce phase basically merges the intermediate values with the same key; of course, we also have multiple reduce instances, and each receives a partition of the key space.
D
As a basic example, we can see here word count, where basically we count the occurrences of letters in a text. You can see the text is split across the different map instances, and each map phase produces as output the key (the letter it has seen) and the number 1, because we're basically counting one occurrence of that letter. That is then sent to the reduce phase, which basically performs a plus-1 operation for every key.
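The letter-count flow just described (map emits a (letter, 1) pair per occurrence; reduce sums per key) can be sketched in plain Python. The splits and letters below are made-up inputs for illustration, not data from the talk:

```python
from collections import defaultdict

def map_phase(split):
    # Map: stateless; for each letter seen, emit an intermediate (key, 1) pair.
    return [(letter, 1) for letter in split]

def reduce_phase(pairs):
    # Reduce: stateful; merge intermediate values that share a key by
    # applying a "+1" (sum) operation, so a counter persists per key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# Two map instances, each receiving a split of the incoming data.
splits = ["abca", "bcc"]
intermediate = []
for split in splits:
    intermediate.extend(map_phase(split))

result = reduce_phase(intermediate)
print(result)  # {'a': 2, 'b': 2, 'c': 3}
```

In the hardware version described in the talk, the stateless map stage and the stateful reduce stage map onto different kinds of pipeline stages.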
D
Other operations, like min, max, or mean, are also okay. And if we take a look at these two phases, we basically find that the first phase is stateless (it's no more than a match table), while the second one is stateful, because we actually have to keep memory in registers for each different reduce flow. So what we asked ourselves is: is there already a hardware MapReduce executor? It turns out we already have one, which is called FlowBlaze, and it was presented by my networking group at NSDI last year.
D
It is a stateful programmable data plane abstraction, used both for software and SmartNICs, as a network-functions accelerator. It's basically a pipeline of stages, both stateless and stateful, which performs extended finite state machines. In this case, processing is restricted to a very few clock cycles, so on the order of nanoseconds, which is six orders of magnitude better than the corresponding software executors, which are bound to milliseconds.
D
So, basically, we select the index of our registers through hash functions, but there is no universal way to resolve hash collisions, which is really use-case dependent. We chose FlowBlaze because it manages to handle collisions in a way that is very transparent for the user.
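The hash-indexed register scheme just mentioned can be illustrated with a toy sketch. The register count, key names, and update rule here are invented for illustration; this shows why collision handling matters (two distinct flow keys can land in the same slot and have their state merged), not how FlowBlaze actually resolves it:

```python
# Toy model of a stateful data plane's register array: a flow key is
# hashed to select a register slot, and per-flow state lives in that slot.
NUM_REGISTERS = 8  # deliberately tiny so collisions are guaranteed

registers = [0] * NUM_REGISTERS

def register_index(flow_key: str) -> int:
    # Select the register index through a hash function.
    return hash(flow_key) % NUM_REGISTERS

def update(flow_key: str):
    # Per-flow counter update; colliding flows share (and corrupt) a slot.
    registers[register_index(flow_key)] += 1

# 20 flows into 8 slots: by pigeonhole, some flows must collide.
for key in ["flow-%d" % i for i in range(20)]:
    update(key)

print(sum(registers))  # all 20 updates landed somewhere
```

A real design must decide what to do on collision (evict, chain, approximate, or punt to software), which is exactly the use-case-dependent choice the talk refers to.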
D
We could have done this also in P4, but we chose this path. Talking about the deployment of these devices: we know that MapReduce really exploits parallelism on many different nodes, and we propose the same architecture for our nodes in the network, for example in a fat-tree topology. But what happens if we have few hardware devices? We have two different solutions: one is to reroute the traffic to our FlowBlaze instances, or we can use our instances as a SmartNIC.
D
What happens here is basically: we have a MapReduce task monitoring packets in the network and computing three different metrics, which are the number of sessions per TCP user (we basically count sessions); the average number of clicks per session (where we're basically counting the number of HTTP GET requests per session); and the average session duration.
D
As you can see, we wanted measurements about workload scaling, starting from 4K up to 512K. The parameters for these tests were: 20 GET requests per session, and the average session time was 140 milliseconds. We saturated a 10-gigabit link, we measured no losses, and our FlowBlaze instance was running on a single CPU clocked at 2 GHz. As you can see, it correctly measures the metrics we wanted.
D
So what we decided to do next was to downgrade the CPU clock to 1.8 GHz, because we wanted to measure some losses. As you can see, the average session duration increases here (the square dots in blue) because of longer queueing delays, and we also have some losses. The average number of HTTP GETs also starts to increase here, between 32K and 64K, because we started to lose some packets, and so GET requests need to be retransmitted. And so that affects the metrics.
D
For future work, we're planning to integrate our toolchain, basically the toolchain we used to program such devices, in a MapReduce environment. We would like to implement more and more applications; we would like to execute these tasks on the same hardware concurrently; we would like to use our devices for basically doing Function-as-a-Service; and also we would like to further compare FlowBlaze and P4 through the P4-to-NetFPGA workflow. And so, yep, that's it, and I'm happy to take any questions.
J
So P4DNS is an implementation of DNS within a P4 switch, and the idea is that you bring content closer to the user, to try and reduce the latency in particular, although we also found significant throughput improvements. So we developed P4DNS using P4-to-NetFPGA and found a 15x throughput improvement and a hundred-times latency reduction over a software-based name server.
J
So here's a data center network with some bits added for DNS. You have a DNS request, and the first one goes all the way over the internet to a name server on the other side of the Internet, and then subsequent ones are accelerated by the DNS server. P4DNS just gets put in the rack with the switch; the first request still has to travel over the internet, but subsequent requests can just get accelerated within the rack.
J
The idea is: a packet comes in, and we run some packet checks. Is it a DNS request? If it is, does it have the right number of questions, because P4DNS only handles one request at a time? Is it an A record request, or is it some kind of MX request or something that we don't handle? Those kinds of checks, to make sure we can actually handle it.
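The checks just listed amount to a small admission filter in front of the fast path. This is an illustrative sketch, not the actual P4DNS code; the packet representation and field names (`is_dns_request`, `qdcount`, `qtype`) are assumptions chosen to mirror DNS header terminology:

```python
# DNS query type codes (from the DNS specification).
QTYPE_A = 1    # A record: handled in the data plane
QTYPE_MX = 15  # MX record: not handled, punted to software

def can_handle(packet: dict) -> bool:
    """Return True only if the data plane can serve this packet itself."""
    if not packet.get("is_dns_request"):
        return False
    # P4DNS only handles one question per request at a time.
    if packet.get("qdcount") != 1:
        return False
    # Only A record requests are served; anything else is punted.
    return packet.get("qtype") == QTYPE_A

print(can_handle({"is_dns_request": True, "qdcount": 1, "qtype": QTYPE_A}))   # True
print(can_handle({"is_dns_request": True, "qdcount": 1, "qtype": QTYPE_MX}))  # False
```

Anything that fails these checks falls through to the control plane, as the talk describes next.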
J
We send DNS responses to the control plane so that the lookup tables can be updated, and unhandled requests are also sent to the control plane so that those can be executed in software, and then the responses are forwarded back. The control plane handles all the other mutability issues surrounding this: it'll handle the table being overfilled, and the data plane will handle TTL updates.
J
And there are some issues surrounding this, particularly with the control plane becoming a bottleneck, because in this implementation (sorry, it's P4-to-NetFPGA) we've separated the mutability into the control plane, leaving the data plane to do non-mutable things. And so, when...
J
But when you start moving to application-layer protocols where mutability is kind of central (memcached is something that comes to mind: you write to the thing and you expect it to change, so mutability is a core concept in it), does it work as well? We were finding that on our machine, the control plane was a significant bottleneck, even with the DNS protocol. And then, further to this: existing protocols are designed for software.
J
The conclusion here is that partial implementations can work: although we don't support everything in P4DNS, we found a throughput and latency improvement over a software-based name server, and in part that is because we aren't implementing everything. I would not expect a full implementation of DNS to perform anywhere near as well, even in hardware. There were some other limitations surrounding P4; one is the field length.
J
This isn't particularly relevant for DNS, because a 40-character domain name is plenty, but for something else, maybe a 1024-bit hash, suddenly it starts looking a bit more like a restriction. And secondly, we found that the complex parsing state machines were using excessive resources. So this is the state machine that P4DNS uses to parse incoming packets; it's fairly simple, that one.
J
There's basically one step at each hop, and we have those branches to handle different lengths of domain names. But this state machine used up basically all of our hardware resources when, as I said, a simple bit stream would have been enough. I can understand, it's very clear, why in some cases you need a state machine: they can do things like recursion, if you want to be able to handle that.
B
Given these limitations, what next? The promise and the limitations: what next?
J
On the field length, and someone may be able to correct me, I believe the field-length limitation is a generic P4 limitation. On the complex parsing machine, I'm really not sure. It wouldn't surprise me if an x86 implementation using a state machine was slower than one just using a simple bit stream, but it certainly wouldn't have the same problem, that you can't fit it on your FPGA, that we ran into with the larger state machine.
L
So, thank you. I am working at Ericsson Research in Finland. I have been working with Roberto Morabito from Princeton, and also with Eve, to take a little bit of a step back and try to see things from a wider perspective. So we are trying to look at the whole continuous fabric of computation, and what it would actually mean if we start to take into account the whole picture, components together.
L
First, a little bit about architectures for distributed computing in general. These are pictures that you're more than familiar with, in the sense that when we have a centralized architecture, basically we have a central cloud where we have applications that are connected to data, or even processes, that are in that central cloud. We have data sources, and all the data is in that centralized environment, which is basically an execution environment, and then basically the idea is that everything needs to be controlled and connected in the cloud.
L
So that means that if we talk about interoperability, basically what you need to take care of is this binding: how you can actually interact with that cloud. Whatever happens inside doesn't need to be so interoperable with any other systems, because basically you take care of everything there.
L
As we move towards what is happening nowadays, which is moving towards the edge, we end up with a kind of decentralized architecture, where we still have clouds (we call them today edge, or fog) and they still connect to the data sources; they also execute some of the processing; and then they might connect to another central cloud, either in a hierarchical way or in a peer way, still serving applications.
L
These applications might also utilize some of those edge resources as well. So, in a nutshell, what we are doing is slicing this big cloud into smaller ones, which requires some additional interoperability, because these clouds need to talk to each other: they most probably need to distribute processing between themselves and also share data. So then we start to have a higher level of interoperability dependencies. And then, finally...
L
It's arguable, of course, that things are starting to be even more distributed, where the devices themselves become execution environments. Think about a car which has a self-driving algorithm running in it. Especially because of AI, this is becoming something that you need: to reduce latency, and probably also for privacy reasons or even practical reasons.
L
You cannot be moving data all the time, everywhere, so the execution is done in the end devices themselves. The data sources also become even more atomic; they could be parts of a subsystem. And in the end, the applications themselves can be running in the same device. I mean, if we are talking about a self-driving car, basically the application is driving, and that is happening in the car.
L
Mapping to other peers as well, basically we have a topology where everything gets mixed, and then interoperability really becomes one of the problems. And then, if we start to think about intelligence as, let's say, one of the engines for this, it means that if I need to bring intelligence into each of these components, how can I do it in a way that is interoperable, so that they can talk to each other?
L
There are different aspects of this, and I will introduce them fast. It may be that you want to compose things: you have one function that does some type of analysis, another does another type of analysis, and then basically you have another function that puts together the output of those and gives the final result, so some kind of functional distribution. That's the first thing to keep in mind. Another one is the agent: an agent can basically be...
L
You could say it is a self-contained unit that can do things and has some sort of mission or goal. And then, how can they actually interact with each other? Is it a hierarchical model, where you have a master which is ordering the others, or do they self-organize in a swarm way? Can there be competition between certain agents, or cooperation between those agents? Also, most probably, you want to have some control over that.
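The functional distribution described a moment ago (two analyses whose outputs a third function combines into a final result) can be sketched very simply. The analysis functions here are placeholder examples, not anything from the talk:

```python
def analysis_a(data):
    # One type of analysis, e.g. a total; could run on one node.
    return sum(data)

def analysis_b(data):
    # Another type of analysis, e.g. a peak value; could run elsewhere.
    return max(data)

def combine(total, peak):
    # The composing function puts together the outputs of the other
    # two and gives the final result.
    return {"total": total, "peak": peak}

data = [3, 1, 4, 1, 5]
result = combine(analysis_a(data), analysis_b(data))
print(result)  # {'total': 14, 'peak': 5}
```

In a distributed setting, each of the three functions could live on a different node; the composition structure is what the interoperability layer has to express.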
L
Then the training of those systems is another dimension to take into account. If you are doing some sort of function that requires data which is local, most probably you don't want to send that data further than the system; it may not even be useful for anything other than the system for which you are doing the training.
L
Taking into account that you are able to process it anyway. Of course, there might be cases where you want to have, when I say generic hardware, something like a TPU for acceleration. I would still call that, in my case, generic hardware, because it would be something that you can put in any device, and then it could run any type of algorithm which has to do with AI.
L
The idea is: can you actually repurpose the intelligence of a device without having to change the whole software and the whole system that the device was initially designed for? In this case, for example, we have human recognition and you want to change it to, I don't know, animal recognition.
L
There is some kind of functional stack of what we would like to achieve, and then there is a mapping of those parts of the stack to actually fit into an architecture. If we look at the functional stack: you have communication, and data which is coming from sensing, and actuation capability, so you can actually tell the things what they should do; you get the calculation for what you need to do, and then try to transform that into knowledge, which can be context, or live filtering and search.
L
Based on that, you could then derive something like a goal, a purpose, or an intention, the higher we go in the stack. So the agent interaction is between different types of entities which most probably don't even know each other from before (imagine a car coming onto a road and how they then interact), and that includes security and so on. So all of these need to somehow map to where decisions should be taken and how they should be handled.
L
Okay, to finish this part: there might be different levels of orchestration here. This orchestration is mainly on data and on processing; this one is orchestration mainly on services and intelligence, which we call lifecycle management, policies, and so on. We could also think about, if you saw the previous slide, we could think about...
L
We have data capturing, and then we have the communication, and then how this is going to be communicated, still with a cloud environment, using this kind of common data layer here. The idea is that we cover everything that is needed to help the system talk to different types of entities; and then this edge computing can, of course, be more blocks put together. This is just an example.
L
So we think that with that kind of common intelligence and data layer, if we added that to the architecture, we could solve many of the problems. This doesn't mean this is, let's say, something you have to introduce exactly equally, or implement equally, everywhere. But think of an operating system: there are many flavors of them; it can be implemented in many ways, but they still serve the same function, as an architecture that resonates across all the different implementations of operating systems, and they can even talk to each other.
L
Our next steps: basically, we will continue analyzing these interoperability requirements and try to do some kind of taxonomy and architecture of this data layer: how it should look, what the components are, and what the functions are that it should have. Then we will start a draft on this, possibly with some legacy from edge discovery and edge data discovery. So hopefully we get a bit more order in it.
A
Sorry, I didn't see, okay. So there are no questions. Are there any questions anyway? We're running out of time, but yeah. Thank you again, Edgar, and thank you for putting that together in a very short time. The next one is Michael, and I will let you talk. We're now moving into more, I would say, the data part, and how we can move from current implementations to more advanced models. So I will let you talk, Michael. Thank you.
G
Yeah, this will be much more high-level, and I don't have any latency or throughput improvements; there's only a little bit of control plane and data plane. But I want to talk about the intersection of the work we're doing in One Data Model, which is more about semantic interoperability of devices, and about extending that work to provide some architectural support for in-network computing and application virtualization. So, just briefly: One Data Model is an organization, and our goal is to harmonize the different semantic models.
G
We find that there's a lot of commonality, but differences in expression and in some of the details. So we want to try to harmonize those, and we've had some success in that. Initially we're building a language, a metamodel, for expressing these semantic models, but eventually we want to have some converged set of models that a developer can just go and use.
G
It can be used to derive interoperability through the encapsulation techniques. So our status is that we have a few standards organizations that are working together, and this is sort of where we've had some success, in bringing these different organizations, OMA LightweightM2M and OneM2M, together. We're beginning to enter a conversation with the Bluetooth SIG about their models, and eventually driving towards standardization, possibly in the IETF.
G
Just quickly: what we're building is a metamodel that has some standardized affordances; I'll talk a little bit more about how those work. We want to extend the work: initially we're working mostly on characterizing and describing IoT devices, because that's sort of a pain point in the industry right now, but we want to very quickly follow that up with the next big gap, which is modeling of behavior and context, and I think this is really where the intersection is with in-network computing.
G
We encapsulate things and objects, mostly, and we allow objects to be composed into bigger things; that's how we model devices. But the important thing about objects is that they have these affordances that we model (properties, actions, and events), which happen to line up really well with the way interfaces are already designed in a lot of IoT devices and a lot of IoT services. And then we have some reusable data types. That's really a big point of interoperability also: the semantics around the data types.
G
We model a thing as a set of objects inside that thing, sort of as an encapsulation, and those collectively create the set of affordances that the thing exposes: properties, actions, and events. This is the current focus of One Data Model and the SDF language: to be able to express these, and where we can, extend things, sort of taking a little bit of a cue from industrial controls, from IEC 61499.
G
Basically, we can create a model of a function block using the same affordances: properties, actions, and events. Coming in, it subscribes to properties, or it reads and writes properties; it can have actions done upon it, like function calls, and can invoke actions on other objects; and it can receive events, and it can emit them.
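A function block exposing those three affordances can be sketched as follows. This is an illustration of the idea only, not SDF syntax or the One Data Model API; the class and method names, and the thermostat example, are all invented:

```python
class FunctionBlock:
    """Toy function block with property, action, and event affordances."""
    def __init__(self, name):
        self.name = name
        self.properties = {}      # readable/writable data
        self.event_handlers = []  # callbacks for emitted events

    # Properties: read and write.
    def write_property(self, key, value):
        self.properties[key] = value

    def read_property(self, key):
        return self.properties.get(key)

    # Actions: operations done upon the block, like function calls.
    def do_action(self, name, *args):
        return getattr(self, "action_" + name)(*args)

    # Events: other blocks can subscribe; the block can emit.
    def on_event(self, handler):
        self.event_handlers.append(handler)

    def emit_event(self, event):
        for handler in self.event_handlers:
            handler(event)

class Thermostat(FunctionBlock):
    def action_set_target(self, degrees):
        self.write_property("target", degrees)
        return degrees

# Wire two blocks together: events from one drive a property of the other.
t = Thermostat("thermostat")
log = FunctionBlock("logger")
t.on_event(lambda e: log.write_property("last_event", e))

t.do_action("set_target", 21)
t.emit_event("target_changed")
print(t.read_property("target"))        # 21
print(log.read_property("last_event"))  # target_changed
```

The wiring at the bottom hints at the graph-style composition of function blocks that the talk turns to next.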
G
The idea here being that you would create a network application by wiring these things together, almost in a graph. You can imagine that's the way IEC 61499 works as well: the data properties from one function block can go to another, actions can be invoked from one function block on another, and events can be propagated from one function block to another.
G
The interesting thing is that this is just an abstract model. It basically allows you to say: here's a function, here's what it does, and here's what its inputs and outputs are. But it doesn't really say how it works on a network. So there's the idea of another layer in the system, the protocol binding, where we define things like content formats and payload formats (the things that go over the network) and how the protocols are used: pub/sub, REST protocols.
G
What are the network addresses of instances, binding things to URLs, so you can actually start building a network? Some examples of this are the W3C Thing Description, which provides both semantic anchor points as well as formats and protocol descriptions, and of course OpenAPI, or Swagger, which people are familiar with. These can be tools that map the model to a particular protocol or set of instances on the network. What's missing, of course, is security considerations: how would you make such a system secure? How do you manage it?
G
How do you discover and configure things (it's maybe more configuration than discovery) and build these application networks? And then instrumentation and diagnostics: how do you identify cycles, how do you know when things are going wrong, and how do you optimize performance, things of that nature.
G
So where we are now with One Data Model: SDF is a set of initial deliverables that we committed to deliver back to the participants to use in their work, but we also want to standardize SDF so that it can have a much broader impact. And then we have queued up work on these behavior and context extensions. We talked a lot about behavior; context is, of course, being able to say what these things influence in the real world, as in industrial controls.
B
A question. I wanted to point out that there's some intersection among the folks who are listening. Obviously Ari and Carsten are very involved in the Thing-to-Thing and WISHI workshop outcomes, and I see OneDM as a really wonderful, positive outcome of that.
B
So yes, there's some cross-pollination, but when Michael told me that he was thinking about this, I couldn't help but think of some of the work going on with named function networking and with RICE, the ICN approach that marshals compute in the network in addition to routing data by name: how do you invoke these functions or function blocks? And while they're not necessarily thinking about the linguistic part of it, or the interoperability part of it, that was where I personally saw the overlap.
B
So I was really excited to hear that Michael was thinking about this, and I thought that at least the folks involved in those kinds of developer concerns, of how you begin to specify what's needed as the I/O for these functions, could weigh in from their own experience. Is this direction for SDF something that we can join and be influenced by? Carsten, I can see you want to answer this question.
N
Yeah, on Tuesday we are going to have a Thing-to-Thing Research Group meeting where we will talk about OneDM, and in particular we will have some introduction to the data model specification language as it is defined now. That is actually something that is sufficiently crystallized at this point in time that it could move out of research and into the IETF.
A
Yeah, so the next presentation is on the drafts, and I've mislaid my agenda, so things are not going well here; hold on.
H
For version 01, we updated the draft a while ago, together with a former colleague and another partner I work with, and I just want to give a quick update on what we've changed compared to the last presentation; the earlier version was presented in Montreal.
H
Very good, thanks. Starting from the introduction: we are going with the term 'apps' here; it's part of the title of the draft. The draft focuses very much on applications, with microservices sort of being executed in, and supported by, the network. That's the premise of the draft, and we outline use cases and research challenges for that vision. Next one, please.
H
What we've changed since it was last presented: we added more use cases in Section 3, which you can see here in the structure, and we put the requirements, which were previously scattered across the use cases, into a separate section, 3.6. We made some minor revisions to the enabling technologies and the challenges sections, and you can see in the structure the subsections that we have pulled out.
H
I think there are four or five at the moment, but we plan to focus more on those in the next version, in particular on Section 4; that's the main change there. I want to briefly walk through the four use cases and then bring up the requirements slide, and that's about it, because this was only a draft update. You could go to the next slide, please.
H
So yeah, mobile application offloading: the drivers there are the limits of mobile devices and their peripherals; AR and VR might be examples. We have a number of dedicated use cases in that area where these microservices are offloaded based on the interaction and the user experience, and it could really be any number of mobile apps. Multi-viewing experiences was the first one that we started with; user gaming is another one that's really quite interesting.
H
We also had a demonstration last year on localized viewing experiences, where some of the functionality is transferred to an edge data center. One of the pain points that we've experienced there, which we then discuss separately in Section 4, is the lack of available platforms for HTTP microservices; there are a number available, but there are differences between them.
H
Another is the latency that's often caused, in particular in implementations where every HTTP request essentially involves a TCP setup, as some implementations actually do, as well as the chaining overhead that is caused by chaining microservices together into a chain of interactions, and so on. Those are the typical pain points. Let's go to the next one.
H
Increasing the compute capabilities is one goal there: you just get more power. The other one, a different set of scenarios, maybe even combined with the first, is localized data and reasoning. Particularly in the EU, but also under other privacy regimes, it can be preferable to localize the reasoning over sensitive data, and it is sometimes better to keep data and reasoning in certain execution points in the network. There are a number of examples there that we could say more about.
H
Localized processing is more about sensor processing, for example a LiDAR-like application for topological mapping, but others fit as well. The privacy aspect comes in with, for example, image recognition, where feature extraction is localized and the application only acts on the extracted features rather than the raw data. Again, the pain points here are very similar: latency as well as chaining, but also the integration of rich processing endpoints that are nowadays not necessarily off-the-shelf hardware.
H
Things like a base station are essentially a reasoning platform that you could use in such an environment, and these processing endpoints are still very often proprietary. Go to the next slide, please. This is the CDN at the network level; a service may be presented as a use case there as well, and it uses multicast opportunities and path-based forwarding to improve the distribution within the CDN, but also towards customers.
H
On the right-hand side you see the CDN origin, one customer at the bottom left, and a number of CDN nodes being provided in various customer networks. Obviously the driver is the significant increase in media content, so you have more and more content being pushed around. The examples are the obvious ones: extending CDN coverage by utilizing technology like 5G to connect nodes out in the field, and also increasing or improving on existing CDN solutions in fixed access. These are the two large examples; then there are the pain points.
H
There is the efficiency both of synchronization in the back end and of the front haul, which means between the CDN origin and the user, as well as the CDN server cost for serving the content, and obviously also the cost caused by inefficient path lengths of forwarding and by DNS redirection. Storage capacity is another pain point, particularly when you want to trade off storage capacity against network utilization in scenarios where you use storage-constrained edge nodes.
H
How often do you refresh the actual edge nodes, trading off network utilization against storage capacity? We put a couple of references in; I'm not sure the research report made it in, but the arXiv papers should be in the draft as well, and probably one of the market reports. This links to the last use case. Here we look at it from an infrastructure perspective: the so-called edge compute fabric as a service.
H
It builds on emerging technology: data-center-like connectivity across a number of access technologies for value-added use cases, which can really be any number or type of data-center-like app implemented over the edge compute fabric. The driver here is a visible infrastructure that is application agnostic but utilizes the benefit of the locally available edge resources. In a way, a real-estate player can be more local than somebody else and so offer a certain advantage in terms of latency, maybe, or in terms of localization of computation, where only very constrained targets are available.
H
A number of players that I had been working with in the past, when we started writing the draft, work very much in the data center interconnect area and are going after this opportunity. The pain points we identified here have to do with topology changes.
H
In the edge area, the actual compute fabric might be changing because of certain mobility aspects, or because of volatile resources being utilized from the far edge, and that leads to a need for dynamic addition of resources, which would allow us to ask for resources in a dynamic, bidding-like way and add them in real time to the actual compute fabric.
H
There's a little bit more detail on this in the draft, and we pulled out a number of requirements. If you go to the next slide: I haven't got a list of them all; I tried to do that, but I ended up with five or six slides, so I just reference them on this slide. They are quite varied.
H
They cover service routing, constraining the execution, pain points of packaging, and synchronization, and they all link back to the use cases and the requirements that we derived there. There is no claim of them being exhaustive, but there are a number that we teased out; what's missing now belongs to the future planning. Next slide, please.
H
Not much more; I can actually leave this. The future plan we have is to extend Section 4. One of the backgrounds there is that I changed company; my affiliation changed in the draft, and that's one of the reasons why there wasn't much time to spend on the research challenges. We note that for the next version of the draft, and in particular we want to link the challenges more clearly to the requirements and to the use cases, to make a closer linkage throughout the document.
H
I'm not sure this will all come in the next version, but it also links to one of the charter items that was listed before: to outline high-level solutions, both existing and under research and development, that at least give some type of, I put it in quotes, maybe air quotes, survey of what's currently available that could really be utilized for realizing some of the use cases, and then potentially even lead to a gap analysis.
Q
Okay, so thank you, and hello everyone. I'm a PhD student; last year my colleague already presented a draft about how in-network computing can generally enhance industrial networks, especially regarding processing. We also see a lot of potential for in-network computing to benefit security and privacy, and that is what I want to present today.
Q
As a consequence, we need to retrofit security and privacy mechanisms, and we see a potential for in-network computing to do so without significant overhead. More precisely, we propose basic protection mechanisms as well as intrusion and anomaly detection implemented in networking devices.
Q
In the case of, for example, general-purpose computers, the data can be sent between those networking devices, or on to the receiver, in a protected manner, without unnoticed access. We should further examine the opportunities, as well as the interest of the manufacturers in such technology.
Q
What is already feasible, in our opinion, is to implement authorization and authentication mechanisms. The advantage of in-network computing is that it allows making elaborate decisions about whether a packet should be forwarded or not. In detail, we see two possible approaches. First, a communication partner trying to connect to an industrial device is required to conduct a handshake for authorization and authentication at the start of every connection.
Q
This could be done on the basis of passwords or certificates, and the cryptographic calculations could be offloaded to the control plane, as they are only needed once per connection. The decision would then be enforced by the networking devices without further processing overhead. The idea would be to send secret tokens with every packet, which are then checked for validity by the networking devices. For this we could, for example, use hash chains to prevent replay attacks, and simple hashing is already possible within existing networking hardware.
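A minimal sketch of the hash-chain idea just mentioned (illustrative only, not the draft's concrete design; names and the anchor-distribution step are assumptions): the sender builds a chain, the verifier holds the chain's tip, and the sender reveals the chain in reverse order, so each token hashes to the previously accepted one and a replayed token is rejected.

```python
import hashlib

def make_chain(seed: bytes, length: int):
    """Build a hash chain; the sender later reveals it in reverse order."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

class Verifier:
    """In-network element checking one token per packet (only one hash op)."""
    def __init__(self, anchor: bytes):
        self.last = anchor            # pre-distributed chain tip

    def accept(self, token: bytes) -> bool:
        # a valid token hashes to the last accepted value; replays fail
        if hashlib.sha256(token).digest() == self.last:
            self.last = token
            return True
        return False

chain = make_chain(b"secret-seed", 5)
verifier = Verifier(anchor=chain[-1])
# sender attaches tokens chain[-2], chain[-3], ... to successive packets
ok_first = verifier.accept(chain[-2])    # fresh token: accepted
ok_replay = verifier.accept(chain[-2])   # replayed token: rejected
```

Only a single hash computation per packet is needed on the verifying device, which is why this fits line-rate hardware.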
Q
In fact, in-network computing is ideal for enforcing such policies, as it allows flexible filtering at line rate, in contrast to existing approaches; for example, software-defined networking with OpenFlow can lead to unacceptable latencies. In-network computing can also be used to consider additional information, like contextual parameters such as the time of day, or even packet contents, instead of just using simple protocol header fields. First proofs of concept for this already exist from other researchers, but the full potential remains subject to future work.
Q
Now, beyond those basic protection mechanisms: attacks might be too subtle to be prevented up front. However, they can lead to noticeable effects in the long run, and moreover, devices might act faulty even without external interference. In-network computing can help again to detect such behavior, and the advantage is that with in-network computing you can, on the one hand, use flow statistics to detect anomalies in traffic patterns.
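As an illustration of the flow-statistics idea, here is a toy threshold detector (not the mechanism proposed in the talk; the function and its parameters are made up for the example): an interval whose packet count deviates far from the running mean of earlier intervals is flagged.

```python
# Toy flow-statistics anomaly detection: flag intervals whose packet
# count is a statistical outlier relative to the history so far.
from statistics import mean, stdev

def detect_anomalies(counts, threshold=3.0, warmup=5):
    """Return indices of intervals whose packet count is an outlier."""
    anomalies = []
    for i in range(warmup, len(counts)):
        history = counts[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9              # avoid division issues on flat traffic
        if abs(counts[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# steady traffic around 100 packets/interval, then a sudden burst
traffic = [100, 101, 99, 100, 102, 100, 400]
```

A real in-network implementation would of course keep such counters in switch registers rather than Python lists, but the decision logic is of this simple, per-interval kind.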
Q
There are multiple opportunities for in-network computing to efficiently benefit security and privacy. By this, we could reduce costs for additional hardware and also processing overhead, which is beneficial in time-sensitive contexts and also for resource-constrained devices, which we cannot easily upgrade. In the future, we want to examine this potential in detail.
R
And still, our intention with the draft is to raise questions. Some of them might be a little bit far-fetched, but we think they are worth asking, so that we all have in mind what could be possible things to think about. In the following, I would like to quickly go over the aspects that we think are worth mentioning.
R
Some depend on whether we answer the previous questions in a certain way. The first thing that we were thinking about is that retransmissions are generally still based on the end-to-end principle. This means that the sender only retransmits if it has determined that the receiver didn't get the original message, and then both the sender and the receiver know that a retransmission is incoming, or that a packet is missing, and they can act accordingly.
R
But now we have COIN elements in the middle, and they also somewhat work on the messages that are transmitted. The question that we are raising here, as a first step, is whether they should also have an understanding of the basic retransmission mechanism: should they know that we are now sending retransmissions through the network or not? And if we answer that question with a yes, the next question would be whether to build this understanding based on the existing transport mechanisms.
R
But here it could be challenging for the COIN elements to actually identify that what they are working on are retransmissions. So we are then thinking about whether we could, for example, have dedicated signals for the COIN elements so that they can more easily detect that a retransmission is going on; and especially if we then have somewhat of a COIN-aware transport, this could be easier to realize.
R
So far we're talking about identifying retransmissions. On the other hand, if we need retransmissions, then packets get lost, and the logical other side of the medal is to ask whether COIN elements should also be able to find out that a packet is missing, so that they can include that in their computations and, for example, wait with computations until a later point when the retransmission arrives; and then, directly as the next step, how they should react if they are able to identify that.
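The gap-detection idea being raised could look roughly like this. This is a hypothetical sketch (not from the draft; all names invented) of a COIN element that watches sequence numbers, notices a missing packet, and defers its computation until the retransmission fills the gap.

```python
# Toy COIN element: processes packets in order, detects sequence gaps,
# and resumes automatically once the retransmission arrives.
class CoinElement:
    def __init__(self):
        self.expected = 0
        self.buffer = {}       # out-of-order packets awaiting the gap fill
        self.processed = []

    def receive(self, seq, payload):
        self.buffer[seq] = payload
        # process in order; a missing seq (lost packet) stalls the pipeline
        while self.expected in self.buffer:
            self.processed.append(self.compute(self.buffer.pop(self.expected)))
            self.expected += 1

    def missing(self):
        # sequence numbers the element can tell are outstanding
        highest = max(self.buffer, default=self.expected - 1)
        return [s for s in range(self.expected, highest + 1)
                if s not in self.buffer]

    def compute(self, payload):
        return payload.upper()    # stand-in for the in-network computation

elem = CoinElement()
elem.receive(0, "a")
elem.receive(2, "c")      # packet 1 is lost; computation on 2 is deferred
gap = elem.missing()      # the element can tell that seq 1 is missing
elem.receive(1, "b")      # retransmission arrives; processing resumes
```

Whether real COIN elements should keep such per-flow state, and how they would learn the sequence numbering without a COIN-aware transport, is exactly the open question the draft raises.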
R
But if we now think of loss-based congestion control like CUBIC, for example, the end host will repeatedly overload our COIN element if the COIN element is the bottleneck of the connection; we overload it even if we know its hardware or computation limits. So we are asking whether we should, similar to flow control, do something like resource reservation in advance, or integrate this into end-to-end flow control.
R
The goal would be to know in advance that we won't overload the computation capacities that we have in the network. For future plans, I've also added the industrial use cases draft on this slide. On the transport issues, we are now asking whether there are aspects or questions that should also be raised, or whether some of the problems that we phrased need additional clarification, so feel free to give us feedback on that. Regarding the industrial use cases draft, we find it very hard to get hard numbers for the use cases and the requirements that are in there.
R
So it is a little bit difficult to further advance the draft. Additionally, as the initial slides showed, the draft is attached to the milestone that is now coming up, and thus we will also raise the question of how we would like to proceed with this draft in the future. That's it from my side.
A
Okay, again, we're really running out of time, so we'll move the discussion on this, which I think is an important one, to the list. I don't mind going over time; if people have a timing issue, please leave when you have to. We still have two presentations; I think we could go about 10-15 minutes over time, which is fine for me. So, on to the next presentations.
A
Ping from China Mobile could not present last time because we were out of time. So, Ping, if you could present, please try to go fast, so we also have time for Eve to present the update on her draft, because she was also cut off last time for lack of time. So please, Ping, go ahead, and please go fast.
S
We consider that this is not for all of the services; some of the new services, like motion control in manufacturing and some electric services, require bounds of one to ten microseconds. And for concurrency, we say there will be numbers of computing nodes deployed in the network, or computing functions created in the network devices, so this will bring a great challenge to the network connection and addressing. These requirements are not new in this version, so we just...
S
...keep them as they are. What is new in this version is that we add some computing aspects. One is deployment: if some computing tasks are to be deployed in the network, the deployer needs to consider what kinds of chips will be deployed; on the other hand, different kinds of computing require quite different kinds of chips, and so on.
S
When there are computing tasks to be run, the network may allocate resources according to the needs of the application. Another thing, which was mentioned last time, is about the scheduling strategy, because the serving node may be changed while the services are running; and for the management, we consider more about a joint optimization.
B
As fast as I can, okay. This is a draft that we began about a year or so ago, and it has had several updates; it's actually on version three, but we are en route to version four. So this is a very brief summary, since we really are running out of time, of what's changed, together with my colleagues Mike McBride, Dirk Kutscher, and Carlos Bernardos.
B
What I mean is, in the context of computing in the network, clearly there's data that you need to marshal as input to these computations, and often these computations, transformations, or analytics produce output. So there arise in turn many other questions: where does the data come from? Where should it go afterwards? Should it be cached, should it flow somewhere else, or migrate somewhere else? And obviously there's this very close dance with the compute.
B
The other important realization is that data discovery is only half of the problem. After you generate the data, maybe there's also a placement problem, and so where this draft needs to go next is to really clarify the lifecycle of the data and how discovery ties in with the broader data management problem.
B
We clarified all the different kinds of data that we're talking about, everything from streaming data to control data and metadata, to data being not just bags of bits but functions themselves and services. We also tried to clarify many of the use cases that we feel fall under this umbrella, and in particular we moved some of the service function chaining discussion there. As I said, we're really running out of time.
B
There are a lot of things we did; all the items with checkmarks are where we received some terrific and very detailed feedback, so thank you to the list, and David Oran in particular. As I said, going forward: I just saw the note that Phil posted; because of its name, the draft doesn't appear as affiliated with the working and research groups, so in the future we will name it accordingly. But there's this question of how broad a problem we want to solve here.
B
Furthermore, like everything else, we really need a more thoughtful security section, in particular around privacy. We've made a lot of additional smaller edits; not minor exactly, but less key input, just sort of finessing what it is that we are trying to solve here. In fact, this draft is a problem statement draft, and so there is the question going forward; I will just jump to that in the interest of time.
B
Do we adopt this and rename it accordingly, so people can find it? How do we want to scope it, so that it's not merely about data discovery but really the lifecycle management that's required to support computing in the network? And this is where it ties in with the other presentations that were made today.
B
What is the current maturity of existing discovery mechanisms, and are they suitable for finding data amongst the other kinds of things in the Internet of Things? And furthermore, how do other kinds of compute-in-the-network systems fare, whether that's orchestration through containers, or solutions for finding data using file systems or distributed hash tables and things like that?
B
How do those fare in terms of support for COIN? So really, we're at the stage where this draft needs to evolve towards pinpointing the gaps in existing solutions, and some of that could be helped by being more articulate about the requirements. We certainly could use the help of somebody who has an interest in the security facets, and by security I also mean the spirit of privacy and trust.
B
Absolutely, I think that's certainly a COIN question, and maybe what would help answer it is if some of the data that we're discovering is the meta-information about edges, about their capabilities and their constraints. For that metadata: how current is it, how up to date is it? That would certainly be the kind of data that contributes to answering which is the right edge compute on which to place this computation.
A
We were supposed to have three hours in Vancouver, so we managed to squeeze everything into two hours, which is pretty good. Okay, so, what I was saying: we still have 29 participants on the call, and we had a maximum of 45, so this was kind of cool. Thank you very much. I think what I had on the list for the future was the preparation for Madrid, and we'll wait for the leadership of both the IETF and the IRTF to see how that will materialize.
A
I could see a few new names on the list, which shows our community is expanding. Thank you very much. And Eve raised a good question on one of her last slides, which is: should we start having research group items? I think that should for sure be on the next agenda, because we have a number of drafts that have achieved a level of maturity; I would say the two of Dirk's, Eve's, and maybe some of the work from Aachen.
A
So let's keep that in mind for the next meeting. I think we may see a lot of ourselves in virtual meetings for a while, so let's get used to that, and hopefully at some point we'll be able to meet again. Thank you very much. For those of you in California, have a great day; for those of you in Europe, have a great evening; and for us in the middle, well, we still have a whole afternoon ahead. Thank you so very much. Thank you, too.