From YouTube: IETF106-COINRG-20191122-1000
Description
COINRG meeting session at IETF106
2019/11/22 1000
https://datatracker.ietf.org/meeting/106/proceedings/
B
This is the goal of the IRTF. I think it's important, I think it's important to stress at every meeting, that we are not doing standards. We are doing research. We are fostering research. We are fostering the development of research communities, in this case around this idea of distributing computing and storage and decision-making throughout the network. And again, we are never going to do a standard here, but we're going to have a lot of informational and potentially experimental RFCs.
B
So again, it's what I said: we want to foster the research we want to do, look at architectures, protocols and real-world cases. And, like I said earlier this week, we have as chairs a little bit of all three things: I do architecture, Eve does architecture and protocols, and obviously Jeffrey, because he comes from industry, has a lot of real-world use cases and such. So this is our agenda.
B
It's extremely tight, and I am very, very sorry to all the presenters who got their presentations squeezed because of time. We promise that in Vancouver we're going to ask for more time, so that as a research group we also have more time to delve into the research aspects, which often need a little bit more detail. So again, I'm very sorry. We had two hours, and I should have asked for more, or we should have asked for more, but we're going to change that.
B
We
have
and
again
following
some
discussion
that
we
had
at
the
interim
that
this
is
not
just
about
drafts
and
drafts
presentation.
So
we
decided
to
cut
to
do
two
sets
of
presentations,
what
we
call
the
research
and
the
research
update
presentations
and
then
all
the
the
drafts
presentation
after
if
anybody
has
comments
about
the
agenda,
okay,
so
we're
going
to
do
this
as
a
as
a
group
here
and
you.
C
Sure. We were invited to the IAB earlier this week, at Colin's request, to give them an update on how we're doing as a research group, and as a result Marie-José gave a terrific presentation and we got lots of feedback from the IAB. We've been around now for a year, so this is our third IETF, and even before that there were some side meetings and so forth. But we thought we'd share with you the feedback that we received; those of you on the mailing list have seen it already.
C
You've seen the really detailed comments that came back, and here on this slide are the main ones. The biggest comment we got, almost from the get-go, was: have you, and should you, give greater consideration to issues around security and privacy and trust, given that we are approaching a new architecture, a more distributed context, within which we're doing this work? And so we got comments about, you know, what would make this work relevant.
B
We decided to do this a bit panel-style, but okay, so let's change the panel person. Yes, I made the presentation, and we got a lot of comments, but they were all in the same direction, security, which is huge, but at the same time, I think, good for this group. And again, thank you to all of you who participated over the last year. We didn't get anybody who said this is a stupid idea and it should never have come to the IRTF.
B
So, you know, at least at this point we were very happy, I think. However, it is a call to action to the whole group: to everybody who presented papers or did some drafts, and even to some of our hackathon participants, and I see them around here. Just maybe revisit what we've been doing while keeping the security point of view in mind. I don't think, and correct me if you disagree, but I don't think it means changing anything now.
B
I don't think it means changing things in a very different way. It just means that, you know, take the example of packet filtering: if I do packet filtering on metadata, what is that metadata in an encrypted system? What does it mean? What do I want to expose to the filtering, or to the in-network processing? Just maybe rethink all of this with the security and the privacy aspects in mind.
F
By the way, in French, Yves and Eve are the same person. In any case, it's okay. The other things I wanted to tell you about are some of the things that seemed to resonate with the IAB, the first of which was that, as we start to talk about this trend towards edge computing, and we talked about what's the relevance to the IETF, for example, there was some excitement, saying: yes, please. One way to differentiate ourselves from all the other efforts out there is to tie it back to what's going on in the internet.
B
Actually, after we talked to them, it made me think of some kind of imagery for it. The internet, for as long as I have been doing IETF (and I don't really remember when I started coming), was more like a telephone network, in the way that you had these routers that were very closed, and they were connected by more or less big connectivity networks.
B
You know, networks and cables and stuff. And I was thinking, after we had the IAB session, that what we're doing right now is more like taking a computer board, breaking it into pieces, throwing the pieces around, connecting them, and then figuring out how we can make that computer board work in a distributed way. And we're talking... ah no, actually I have my timer; we still have one minute.
B
B
So, hackathon summary: you want to talk about it? Okay. People who were at the hackathon, please raise your hands; I see a few around. I'm talking too much anyway. We had a hackathon on Saturday and Sunday, and we're still embryonic in what we want to do with this. So we mainly did some work to familiarize ourselves with data-plane programming, but we were lucky: we ended up getting a bunch of people who actually had projects, and we did that.
B
The
first
idea
was
to
continue
what
we
had
started:
the
Montreal
Impact
filtering,
but
then
we
had
again
incredible.
People
came
with
projects
and
the
idea
that
we
have
and
we're
going
to
discuss
it
in
the
in
an
interim
that
we
plan
to
do
later,
actually
how
to
maybe
have
a
common
project
and
your
Eve
and
not
have
and
Eva
started
to
actually
put
some
words
into
this,
so
we're
going
to
share
that
to
the
list.
So
this
is
people
working.
We
had
obviously
the
tutorials
for
people
who
didn't
know
what
they
were
doing.
C
Quick
drafts
update,
so
we've
got
actually
a
total
of
eight.
These
are
the
first
four
we've
got
them
listed
in
chronological
order,
you're
gonna
be
hearing
today.
The
first
three
that
are
listed
are
ones
that
were
not
updated
and
the
the
bottom
one
also
is
not
going
to
have
representation.
Today
it
was
presented
last
time,
but
they
in
their
work.
C
They updated their section on application packaging and programming frameworks for decomposing at run time, and, interestingly, they also cited that one of the outcomes they're hoping for from their draft is basically a research roadmap: how do we go forward? Okay, these are the four others that we have. Although we will be hearing from the industrial use case, they updated their draft in the security-considerations area and also for traffic filters.
C
We
are
going
to
hear
from
them
today
for
the
Transfer
Protocol
issues
and
we
are
also
hearing
from
the
other
two,
so
we
will
wait
till
later
in
the
session.
Some
points
to
to
note
the
fact
that
we've
got
eight
internet
drops
actually
stood
us
pretty
well
with
the
IAB
that
there
are
people
interested
in
this
topic,
and
it's
our
intention
that
if
we
get
progressed
from
a
proposed
RG
to
an
actual
RG
that
we
will
look
at
the
most
mature
of
these
try
to
advance
a
couple
of
these
to
be
taken
in.
B
Okay, so we can start with the presentations without further delay. Actually, this is the same advice we gave in Montreal: when you present, if it fits clearly in your slides, that's great; if not, well, maybe just highlight how you relate to our charter and to the cloud-to-edge continuum. The first presentation is Jörg's.
F
Thanks. So I just realized that I'm doubly wrong here. First of all, this is not in the focus of this research group: apparently it's not core-to-edge, it's probably edge, or falling off the edge. And then I don't have anything on privacy, security and trust, so maybe that is yet to come. Still, I want to talk about two different bits here.
F
On the one hand, this means that we are not looking at generic VMs or rack servers that have arbitrary compute capabilities; we are looking at specific devices that have certain capabilities that others might not have, which also means that they are usually harder to scale. There's just no point in putting up two hundred or a thousand temperature sensors next to each other. We're also talking about mobile users, which means that we don't do orchestration according to RTT minimization or global load balancing.
F
But there is a certain locality relevant here. If I'm trying to control the temperature in this room (luckily, this morning that doesn't seem to be necessary, because the system seems to be working), then I want to interact with what's around me, not with something at the other end of the city or in a different room. And thirdly, we are looking at relatively fine-grained function decomposition. And forgive me if I'm coughing a bit; this is the impact of the air conditioners.
F
So we like mobile code, and I have two examples that follow this idea. One is a Lua-based mobile-code execution environment, and the second one is a trigger-action framework leveraging BLE beacons. Both are client-driven, in that whatever is being programmed comes from the user device, comes from the users. We don't interact with monster machines; we interact with ESP32 microcontrollers, which are sufficiently lightweight and cheap.
F
Let me quickly start with number one, the Lua-based mobile-code execution environment. You can see the picture there: there is a small node that looks like a smartphone, moving along a dotted line from left to right, and the fundamental idea is that it's going to pick up signals from the devices around it, including their capabilities, and then make use of the devices that are currently in reach.
F
These individual nodes that the mobile device interacts with have a certain architecture. I've already mentioned that this is an ESP microcontroller, so that runs at the bottom, and then we have an RTOS on top of that. On top of this, again, we have a bunch of C-based drivers, bindings to the Lua language, and then our execution framework, which gives us the blue box on the left-hand side, and that provides drivers to interface to actuators and to sensors.
F
Turning on lights and measuring temperature are simple examples. It has interfaces to local storage and interfaces to the network, so that these things can talk to each other. So, fundamentally, we are using Lua process VMs, as they offer generic code-execution platforms. We send our Lua scripts as strings, with metadata attached about what kind of resources they are going to require, so that the device can actually make a sensible decision about whether it can sensibly execute that code and whether it has all the necessary resources.
F
The node itself can then decide whether it has the necessary capabilities and enough resources, in terms of compute power available at the moment, and can either refuse execution or accept the script. The instantiation then happens on the executing node, and at some point the mobile can collect the results. We support two different modes of operation: one is one-shot interactions, fetching data; the other is instantiation of code that runs in the background and, for example, triggers updates every k seconds.
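The accept-or-refuse decision described here can be sketched roughly as follows. This is purely an illustrative model, not the prototype's actual API: all field names, capability strings, and the memory check are assumptions.

```python
# Illustrative sketch: a node compares a script's declared resource
# metadata against its own capabilities and current free resources,
# and either accepts or refuses execution. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class ScriptRequest:
    code: str          # the Lua script, shipped as a string
    peripherals: set   # required capabilities, e.g. {"temp_sensor"}
    mem_bytes: int     # declared memory requirement
    mode: str          # "one_shot" or "background"

@dataclass
class NodeState:
    capabilities: set  # what this node can actually do
    free_mem_bytes: int

def admit(node: NodeState, req: ScriptRequest) -> bool:
    """Return True if the node can sensibly execute the script."""
    if not req.peripherals <= node.capabilities:
        return False   # missing a required sensor/actuator
    if req.mem_bytes > node.free_mem_bytes:
        return False   # not enough memory available right now
    return True

node = NodeState(capabilities={"temp_sensor", "led"}, free_mem_bytes=16_000)
ok = admit(node, ScriptRequest("print(temp())", {"temp_sensor"}, 4_000, "one_shot"))
bad = admit(node, ScriptRequest("spin()", {"camera"}, 4_000, "background"))
print(ok, bad)  # → True False
```

The same check works for both modes of operation; a background script would simply keep its reservation until it is torn down.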
F
So
this
whole
thing
looks
like
then
two-stage
process,
but
on
the
one
hand
we
have
a
script
collection
and
distribution.
Networking
interface
that
put
scripts
into
a
local
@q,
the
weather
from
which
the
runtime
environment
picks
what
they
can
execute
next,
providing
some
kind
of
serialization,
but
there's
also
some
degree
of
parallelism
supported
no.
F
Not too bad. So we built this and tried it out, and got roughly the result that, at least with fairly poorly optimized Wi-Fi connectivity, the slowest part at the moment is actually discovery and code instantiation. That gets a bit better if you transfer multiple scripts in one run. But the entire thing is sufficiently lightweight that we figured, by extrapolation, that a single one of these little microcontrollers can support something like a hundred user devices.
F
The
second
part,
so
this
was
point-to-point
communication,
mobile,
node
talks
to
an
individual
in
structure,
node
and
then
offload
something,
and
then
programs,
then
one
by
one.
The
second
part
is
a
more
bus
based
approach
where
we
essentially
have
built
distributed.
F
If
this,
then
that
trigger
action
framework
on
the
left
hand
side,
you
see
trigger
drivers
which
essentially
can
consider
to
be
sensors,
Hardware
sensors
that
generate
signals
whenever
a
certain
reading
temperature,
whatever
on
the
right
hand,
side,
we
have
extra
action
drivers
which
are
the
equivalent
of
doing
something
and
in
the
middle
we
have
two
blocks.
One
is
about
something
like
a
boolean
circuit
that
can
express
sophisticated.
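A minimal reading of that middle block might look like this; the rule format and signal names below are illustrative assumptions, not the framework's actual representation.

```python
# Illustrative sketch of the "boolean circuit" block: a rule combines
# named trigger signals through a boolean expression and, when the
# expression is true, fires its action drivers. Names are hypothetical.

RULES = [
    {
        "expr": lambda s: s.get("temp_high", False) and not s.get("window_open", False),
        "actions": ["start_fan"],
    },
]

def evaluate(signals: dict, rules=RULES) -> list:
    """Return the actions to fire for the current trigger signals."""
    fired = []
    for rule in rules:
        if rule["expr"](signals):
            fired.extend(rule["actions"])
    return fired

print(evaluate({"temp_high": True, "window_open": False}))  # → ['start_fan']
print(evaluate({"temp_high": True, "window_open": True}))   # → []
```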
F
That's roughly the overall architecture. In terms of the in-network compute, we essentially have distributed triggers and actions; our code is the program logic that we distribute. Function properties and events are essentially globally unique device identifiers, and then there are instance IDs to refer to specific instances, as well as definition IDs to provide more complex functions, so we can actually provide some abstraction in terms of programming.
F
We
utilize
ble
beacons
as
a
bus
system,
so
this
is
actually
essentially
used
for
everything.
We
use
this
to
discover
nearby,
divide
nearby
devices
and
their
capabilities
to
learn
about
what
features
they
can
do
to
spread
rules
and
all
these
little
programs
and
then
once
those
up
to
spread
and
load
it
to
also
execute
the
respective
triggers
and
have
the
triggers
flow
across
the
network.
F
So
this
comes
back
to
my
mama
zo
says
mentioning
breaking
up
a
board
of
a
computer
and
having
things
flow
back
back
and
forth
between
the
individual
bits
and
pieces
some
extent.
This
is
a
bit
on
the
extreme
savvy
because
we
only
move
computation,
no
data
at
all,
everything
that
the
only
thing
that
gets
moved
is
individual
trigger
signals.
F
There
are
a
bunch
of
protocol
messages
they
are
sufficiently
squeezed
so
that
they
can
be
and
efficiently
represented
to
the
so
that
they
can
fit
into
Bluetooth,
low-energy
beacons,
I'm
not
going
to
go
through
those
in
detail.
In
the
end,
you
get
something
like
an
model
where
you
have
a
bunch
of
sensors
in
the
environment,
you
can
have
multiple
mobile
nodes
that
are
running
around.
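To give a feel for the squeezing, here is a hedged sketch of packing a rule-distribution message into a BLE legacy advertising payload (31 bytes). The field layout below is an assumption chosen for illustration, not the actual on-air format.

```python
# Hypothetical layout: one message-type byte plus four 16-bit fields,
# packed big-endian. 9 bytes total, comfortably under the 31-byte
# limit of a BLE legacy advertisement.

import struct

ADV_PAYLOAD_MAX = 31  # bytes available in a BLE legacy advertisement

def pack_rule(msg_type: int, rule_id: int, trigger_id: int,
              action_id: int, threshold: int) -> bytes:
    payload = struct.pack(">BHHHH", msg_type, rule_id, trigger_id,
                          action_id, threshold)
    assert len(payload) <= ADV_PAYLOAD_MAX
    return payload

beacon = pack_rule(msg_type=2, rule_id=7, trigger_id=300,
                   action_id=12, threshold=25)
print(len(beacon))  # → 9
```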
F
They interact with these, find out which capabilities they have, can then spread triggers, rules and actions, these little programs, and thereby actually connect those entities together, and then go away; and these things can continue talking to each other, self-orchestrating, in the end, according to what the mobile devices programmed them to do. And there are many interesting little issues here: how do you deal with conflicting rules? What do you do if you have 20 different users in parallel?
F
There are fairness issues to be discussed, and so on. So we are currently exploring the basic invocation principles; these other systems and operations issues are on our to-do list, and not all of them have obviously been solved at this stage, but we have running prototypes for this as well, again on an ESP32. So, putting this briefly into the context of the group: I'm going to skip through most of the remaining slides in the interest of time.
F
This
is
also
something
one
can
easily
read
up
in
the
end.
Originally
I
had
five
minutes
more,
so
forgive
me
our
take
is
that
we
have
roughly
five
steps
when
it
comes
to
a
network
compute
operations.
What
we
need
to
define
what
kind
of
functions
we
have,
what
kind
of
properties
there
so
that
they
have
so
that
they
become
identifiable
next,
we
need
to
be
able
to
discover
those
beers
in
a
local
environment,
as
we
showed
or
in
a
network
environment
using
any
casting
or
whatever.
F
So
this
is
this
is
where
we
have
our
little
instance
of
functions
called
G
and
next
to
next
to
discovery,
and
then
an
Orchestrator,
a
client
or
whoever
I
would
be
responsible
for
picking
which
of
these
functions
should
be
executing
a
particular
program
or
a
particular
workflow
at
a
given
point
in
time,
and
then,
if
we
have
multiple
of
these,
they
need
to
be
orchestrated
and
linked
up
and
then,
in
the
end,
we
have
an
execution
step
in
which
those
different
code
needs
to
be
transferred.
Execution
flow
needs
to
be
transferred,
and
so
on.
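The five steps named here (define, discover, select, orchestrate, execute) can be sketched end to end; everything below, from the function registry to the node names, is a made-up illustration of the flow, not the draft's actual design.

```python
# Illustrative five-step pipeline:
# (1) functions with identifiable properties, (2) discovery,
# (3) selection, (4) chaining/orchestration, (5) execution.

FUNCTIONS = {
    # function id -> hosting node and advertised properties
    "resize": {"node": "esp-1", "props": {"cpu": 2}},
    "detect": {"node": "esp-2", "props": {"cpu": 8}},
}

def discover(required: list) -> dict:
    """Step 2: find which of the required functions are reachable."""
    return {f: FUNCTIONS[f] for f in required if f in FUNCTIONS}

def select_and_chain(required: list) -> list:
    """Steps 3-4: pick an instance per function, linked in order.
    Selection is trivialized here; a real orchestrator would use
    the advertised properties."""
    found = discover(required)
    return [(f, found[f]["node"]) for f in required if f in found]

def execute(chain: list, data):
    """Step 5: hand the execution flow from node to node (simulated)."""
    for func, node in chain:
        data = f"{func}@{node}({data})"
    return data

print(execute(select_and_chain(["resize", "detect"]), "img"))
# → detect@esp-2(resize@esp-1(img))
```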
F
I have individual slides for each of these different steps for the two examples, and we will probably fold some of this into our architecture draft, or something similar, in the end. Let me conclude this short presentation. This was an example of in-network computing for broadcast networks; of course, discovery and things like this can also be extended to go beyond simple broadcast effects. This gave us a good starting point to build stuff that works purely locally, but it is, of course, not restricted to this.
F
There
are
two
interesting
metal
aspects:
I
want
to
rise,
one
interesting
bit,
that's
worthwhile,
looking
at
in
future
workers
in,
and
especially
in
the
second
case,
but
also
in
the
first
one.
We
pushed
control
explicitly
into
the
network
and
then
the
user
can
go
away
and
at
the
let
work,
autonomously
act
as
a
computer
and
do
what
it
is
supposed,
what
we
would
want
it
to
do
and
which
probably
comes
closer
to
running
a
demon
process
or
some
some
background
thing
in
the
end.
F
That,
of
course,
brings
all
kinds
of
operations
and
management
issues
that
would
need
to
be
considered
in
the
future,
and
then
we
have
a
similar
question
that
the
IOT
folks,
when
it
comes
to
data
semantics,
have
been
discussing
at
length
that
we
also
need
to
figure
out
how
to
describe
semantics
of
api's
with
signatures
and
and
the
like.
In
order
to
allow
this
composability
across
different
functions
inside
the
network,
and
that's
it.
We.
G
F
In these cases, the moment we distribute our own scripts, you get an implicit naming; I mean, you are running something on a given node. So this is capability-driven: I'm not naming functions, I'm looking for devices. It is essentially attribute-based addressing that we are doing here.
F
Given
that
we
have
e
in
the
in
the
specific
temperature
in
the
specific
trigger
action
case,
we
have
a
class
of
temperature
drivers,
so
they
have
a
common
identifier.
They
all
provide
the
same
kind
of
data
object
as
an
output,
so
you
would
essentially
find
which
devices
offer
this
kind
of
a
sense
this
this
kind
of
a
feature
by
comparing
this.
This
is
right
now,
in
this
specific
case,
it's
numeric
IDs,
simply
because
we
need
to
be
concise
and
our
ble
beacons
don't
give
enough
space
for
having
longer
names.
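The attribute-based matching on numeric capability IDs might look like this; the ID values and node names are invented for illustration.

```python
# Illustrative sketch: devices advertise numeric capability IDs in
# their beacons; a client selects devices by the capability class it
# needs rather than by a function name. IDs here are hypothetical.

TEMP_SENSOR = 0x0001   # assumed numeric ID for "temperature driver"
LED_ACTUATOR = 0x0002

ADVERTS = {
    "node-a": {TEMP_SENSOR},
    "node-b": {LED_ACTUATOR},
    "node-c": {TEMP_SENSOR, LED_ACTUATOR},
}

def devices_offering(capability_id: int, adverts=ADVERTS) -> list:
    """Return all devices whose beacons advertise the capability."""
    return sorted(dev for dev, caps in adverts.items()
                  if capability_id in caps)

print(devices_offering(TEMP_SENSOR))  # → ['node-a', 'node-c']
```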
E
Good morning, I'm Alessandro Bassi, and with Marie-José we started some work on vertical agriculture and Agriculture 4.0, basically how to develop a reference architecture for this field. First of all, I've been working on IoT for twelve years, since 2007, and one of the main problems I've always found is agreeing on what IoT is. Even in this room, if everybody sat down and wrote their definition of IoT, we would probably see some strong disagreement, not alignment.
E
Even a digital one. Basically, what you experience is an augmented entity, which is made of the real entity and the virtual one, through services that are actually sitting on resources; resources are both hardware and software, and they may be on the device itself, in the cloud, or anywhere. Now, I don't want to convert you to my religion of IoT; what I'm telling you is what I'm starting from.
E
And
for
written
reference,
architectures
should
be
easier,
but
just
just
to
make
sure
reference.
Architecture
is
basically
a
reference
for
building
compliant.
Iot
architecture
is
a
blueprint
basically
and
in
in
a
reference
architecture.
There
are
views
and
perspectives,
I
mean
performance,
security,
scalability
and
so
on
and
so
forth.
Then
all
these
different
views
I
mean
needs
to
basically
be
put
together
in
order
to
find
something
which
is
basically
the
local
optimum
I
mean
for
for
being
a
real
architecture.
E
In
this
case,
fusion
and
the
perspectives
are
used
by
the
definition
and
the
standard
definition
of
them
and
let's
not
go
further,
because
otherwise
I
mean
ten
minutes
will
be
spent
just
on
on
definitions,
why
we
were
looking
at
agriculture
in
the
first
place.
Agriculture
is
I,
wouldn't
say
a
very
laid
down
domain,
but
for
sure
is
not
the
most
modern
one,
and
that
is
huge
amount
of
improvement.
I
mean
they
can
be
done.
When
you
really
go
in
the
field,
I
mean
and
you're
working
with
people.
E
Devices are out there, so you know you cannot control their environment, I mean with humidity, with temperature, with chemicals and so on. Communication: I don't have to explain here what that is. Self-properties, which are basically autonomic properties, or artificial intelligence and so on. Identification, again, is something I don't really have to explain here; security, again; packaging, which is basically how to put devices into non-standard material, which can be food, can be metal, can be whatever; and energy considerations.
E
Most
of
these
devices
I
mean
our
battery
base,
or
how
do
you
develop
are
almost
zero
entropy
systems
and
quality,
because
of
course,
I
mean
you
don't
want.
Devices
are
giving
like,
which
have
a
result.
I
mean
all
the
time,
and
you
need
to
have
some
sort
of
security.
I
mean
that
the
device
Amin
is
projecting
something
good
now
related
to
agriculture.
Amin
we
can
discuss
about.
E
This
number
is
more
or
less,
but
clearly
I'm
in
harsh
environments
is
very
important,
because
if
you
put
sensors
I
mean
in
in
a
field,
I
mean
you
don't
really
know,
I
mean
what
what's
gonna
happen.
Their
communication
self
properties
are
all
important,
I
mean
as
well
as
packaging
because
for
sure
is
a
non-standard
place
where
to
put
a
computational
device.
E
Very sorry. So, centralized versus edge: clearly, in an agricultural field we need some sort of edge architecture, and if you need a direction, we need to have computation at the edge, or computation within the network. Now, for doing that, we need some specific hardware, which needs to be programmable. For all the stuff you read there, data filtering and content-based routing, we can use P4 languages, as in the hackathon, and it should do at least some data analysis and some message processing.
E
Now, if you basically put intelligence in the network, we're talking about network-enabled artificial intelligence. Intelligence, if you consider it as the calculation of the probability of something happening or not happening, according to parameters coming in from an input, is basically what agriculture needs, right? You have a bunch of parameters and you have to predict how things are going: what happened here, did something go wrong? Okay. Now, this is the architecture, which is basically divided into four different layers.
E
With respect to what needs to be done now: the distributed edge layer, very quickly. This is the part which is most relevant to this group, and it is basically the P4 one. Data and metadata are actually filtered in the network according to their importance: according to whether there are errors, or things which are basically not fulfilling what is expected. So you can have local nodes, and you can have remote nodes, which are connected in either a wired or a wireless way.
E
Whether wired or wireless is really not important; what matters is that the filtering happens at the network level, and then issues can be brought up to the cloud if they are important, or they can be resolved on the spot if it's really something minor, like a little bit more water or a little bit more light. Next steps: basically, we plan to formalize this in a draft before Vancouver. Thank you.
H
While you're getting those up, I'll introduce myself: I'm Jeff Hill, and I'm with NoviFlow. We talked to you last year about our match-action pipeline; we produce very high-performance, six-terabit switches that are fully programmable with P4 and P4Runtime. So this year we're going to talk about implementing those. I run the application group, where we take the technology and move it into solutions, so this is kind of an applied research group.
H
Yeah, next slide. So I'll give you a real quick overview; we'll go through these first slides very fast. One: we're going to talk about a new use case with INT telemetry. This is a very big deal in the programmable-pipeline world, where each packet starts to carry metadata about what it's doing, and it provides completely new opportunities for the intelligence you gather in the network, for how you run the network, and for the compute you can do in the network.
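The per-packet metadata idea can be modeled in a few lines. This is a simplified illustration of the INT pattern (push metadata at each hop, pop the stack at the sink), not the actual wire format; all field names are assumptions.

```python
# Illustrative model of in-band network telemetry (INT): each
# INT-capable hop pushes its own metadata onto the packet, and the
# sink pops the whole stack off and sends it to the analytics engine.

def int_push(packet: dict, switch_id: str, ingress_ts: int, egress_ts: int):
    """Executed at every INT-capable hop: append this hop's metadata."""
    packet.setdefault("int_stack", []).append({
        "switch": switch_id,
        "hop_latency": egress_ts - ingress_ts,
    })

def int_sink(packet: dict) -> list:
    """At the last hop: strip the telemetry off for the engine."""
    return packet.pop("int_stack", [])

pkt = {"payload": b"..."}
int_push(pkt, "sw1", ingress_ts=100, egress_ts=130)
int_push(pkt, "sw2", ingress_ts=500, egress_ts=520)
report = int_sink(pkt)
print([h["hop_latency"] for h in report])  # → [30, 20]
print("int_stack" in pkt)                  # → False
```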
H
In our case, we're going to go up against a requirement of figuring out latency, which is kind of just the very basic thing, but in this case you're going to ask: how do we do it against passive tools? How do we do it against things that aren't INT-capable out there? I'm going to introduce the environment we had, and we're going to talk about the tyranny of INT data; it's not as great as it sounds.
H
It
comes
with
a
high
price,
we'll
talk
about
that
and
then
some
strategy,
some
actual
ways
to
deal
with
that.
So
next
slide,
so
whenever,
whenever
you
about
telemetry
you're,
going
to
get
a
slide
like
this
you're
going
to
see,
pictures
like
this,
which
is
packages,
are
moving
through
the
network
and
every
hop
they
get.
Data
pushed
into
them
that
talks
about
the
telemetry
in
there
and
at
the
end
the
telemetry
data
gets
popped
off
and
sent
off
to
an
engine
looks
great,
but
this
is
a
forklift
upgrade.
H
The interesting thing here is that every participant is a capable INT device, so it has to be up to date on this stuff. Next slide. Our question is: what could we do to deal with things that aren't capable, where we have a passive device that isn't INT-ready? That's a big problem out there in telcos: they have billions of dollars in these security tools. Firewalls are the ones you'd know best, but there are all kinds of security tools out there.
H
So
you
know
if
you,
if
you
have
a
terabit
date
of
data
coming
in,
you
probably
have
half
a
million
dollars
where
the
security
devices
out
there.
So
very
simple
question
is:
what's
the
latency
through
these?
How
how
well
are
these
things
operating?
Nobody
knows
yeah,
it's
it's
tough!
So
what
we've
done
is
said.
Ok
we've
got
a
new
technology,
but
you
know
in
int,
but
we
have
an
old
technology
in
these
firewalls
out
there.
So,
let's
put
a
wrapper
around
it
at
the
you
know.
H
So
next
slide.
So,
in
our
case,
what
we're
doing
here
some
detail,
you
can
look
at
it
out
there,
but
basically,
as
the
packet
comes
in
to
the
switch,
the
the
bottom
center
is
a
switch
down
there.
Where
it
says
packet
broker
service.
We
put
a
tag
on
and
wait
and
I
put
a
timestamp
on
then
I.
Send
it
out
to
the
tool
farm
and
all
all
that
for
the
net
or
paloalto
have
to
do
is
not
croak,
because
it
has
a
new
tag.
H
It just has to be able to process the packet with its normal stuff and have it hit, or pass through it. As it comes back, I timestamp it again, and then we do the standard INT stuff: I pop it off, send it out to an analytics data lake, and the packet moves on as normal.
H
The
data
out
there
I
can
tell
you
exactly
how
much
latency
that
firewall
has
caused
and
any
time
of
day,
but
the
tool
has
it
hasn't
had
to
adapt
at
all.
So
again,
I
think
it's
a
very,
very
good,
general
principle.
So,
let's
look
at
what
Jeff
to
me
to
meant
okay,
so
let's
look
at
the
downside
real
quickly.
Next,
next
slide.
H
You
know
the
interesting
thing
here
is
the
data
comes
so
quick.
You
know
we're
talking
about
terabit
solutions.
We
get
millions
and
millions
packets,
so
you
have
to
reduce
it
quickly.
Next
slide
on
that
and
what
it
says
on
the
on
the
left
hand.
Side
is
there's.
Actually
you
can
start
to
come
up
with
an
algorithm
where
you
weigh
the
cost
of
gathering
stuff
from
a
packet
and
versus
the
value
of
that
and
you
balance
those
out
so
that
you
know
you
don't
gather
every
observations.
You
don't
put
billions
of
records
in
your
database.
H
You stretch them out and you gather the value across that, and that allows you to do very impactful things while keeping the cost down.
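One possible reading of this cost-versus-value balance (an assumption on our part, not necessarily the algorithm used) is to emit a record only when a new observation changes the picture by more than the record is worth:

```python
# Illustrative reducer: store a latency record only when it differs
# from the last stored value by more than a threshold, instead of
# writing billions of near-identical records.

class LatencyReporter:
    def __init__(self, threshold_ns: int):
        self.threshold_ns = threshold_ns  # the "cost" bar for a record
        self.last_reported = None

    def observe(self, latency_ns: int) -> bool:
        """Return True if this observation is worth storing."""
        if (self.last_reported is None or
                abs(latency_ns - self.last_reported) >= self.threshold_ns):
            self.last_reported = latency_ns
            return True
        return False

rep = LatencyReporter(threshold_ns=100_000)
decisions = [rep.observe(v) for v in [500_000, 510_000, 505_000, 700_000]]
print(decisions)  # → [True, False, False, True]
```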
It's the last slide, and here's just real-world stuff we're working on: we have a tool farm, and we're driving a terabyte of data through here.
H
I have three different environments: one where we just did a loopback to show there's no latency; the next where we have a tool with a minimum set of security filters running on it; and then one with a full set, and you can see the latency pop up on that. And none of that data is being collected over on the tool farm; it's being collected in the network, as a value-add. So we think it fits very well with how you put this together.
J
Mizrahi. So, thanks for this talk. I wanted to ask: you mentioned timestamps a few times, and obviously you need accurate timing here. Usually the timer, or clock, is going to be implemented in hardware; on the other hand, we know that it's often the case that these protocols, like IOAM and INT, each use a different timestamp format. So I wonder: to what extent do you think this matters?
H
So, a great, great question, thank you. In this case, you have to look at the latency being added by the firewall, which is huge. So the difference between a software timestamp coming out of a standard switch (we're on a Tofino switch) and a hardware timestamp coming out of 1588: we're not looking at that level of detail here. The software timestamp coming right off the Tofino is more than fine; it is two or three orders of magnitude
H
Finer,
you
know
accuracy.
Then
then
we
need
to
measure
the
latency.
You
can
have
other
situations
where
you
need,
so
if
you're
doing
them,
if
you're
doing
the
time,
stamping
and
broadcast
you
need
that
1588
you're
down
to
frame
by
frame,
you
know
it's
tough,
but
in
this
application
the
built
in
the
hardware
time
you
know
time
stamp
on
the
trophy.
No
chip
is
more
than
fine.
K
Okay, good morning, everyone. My name is Michel Bonfim; I'm a PhD candidate at the Federal University of Pernambuco, and I'm here today to present a P4 implementation named SFCMon. It aims to provide an efficient and scalable monitoring system for network flows in SFC-enabled domains.
K
So,
let's
start
by
defining
what
is
s
SFC
and
as
I've
seen
in
search
function
chaining,
and
it
is
a
mechanism
that
allows
another
set
of
network
service
functions,
connect
to
each
other
and
form
into
a
network
service
through
different
data,
centers
ones
and
applications
of
providers,
and
there
was
it
worth
mention
that
sfc
is
considered
a
key
enabler
for
network
function
visualization.
So
it
is
a
haughty
research
topic,
one
in
this
work.
You
attack
the
problem
of
how
to
provide
monetary
test
for
SF
C's
in
a
scalable
and
efficient
manner.
K: Taking this into consideration, for SFC purposes we argue that a monitoring system should take into account all transmitted packets while at the same time keeping memory and processing at acceptable levels. So we propose SFCMon, an efficient and scalable monitoring solution to keep track of network flows in SFC-enabled domains, and to achieve its goals—
K: This figure illustrates the SFCMon processing pipeline. Basically, it works with three types of probabilistic data structures — of course I will not go deeper, in the interest of time — but I would like to highlight two of them. The first one is a count-min sketch, which is responsible for the detection and filtering of large flows; for this, it solves the approximate heavy-hitters problem. The other structure is an invertible Bloom lookup table, which is responsible for keeping track of the flow records of these filtered large flows.
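The count-min-sketch-based heavy-hitter filtering described above can be sketched in a few lines. This is a minimal illustration of the general technique, not SFCMon's actual code or parameters: the width, depth, threshold, and flow keys below are all made up for the example.

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch; width/depth here are illustrative only."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, key: bytes):
        # One independent hash per row, derived by salting BLAKE2b.
        for row in range(self.depth):
            h = hashlib.blake2b(key, digest_size=8, salt=bytes([row] * 8))
            yield row, int.from_bytes(h.digest(), "big") % self.width

    def add(self, key: bytes, count: int = 1) -> int:
        """Update the sketch and return the (over-)estimated count for key."""
        est = float("inf")
        for row, col in self._indexes(key):
            self.table[row][col] += count
            est = min(est, self.table[row][col])
        return est

# Heavy-hitter filtering as described in the talk: flows whose estimated
# packet count crosses a threshold are promoted to exact per-flow tracking.
THRESHOLD = 3  # illustrative
cms = CountMinSketch()
large_flows = set()
packets = [b"flowA"] * 5 + [b"flowB"] * 2 + [b"flowC"]
for five_tuple in packets:
    if cms.add(five_tuple) >= THRESHOLD:
        large_flows.add(five_tuple)
print(large_flows)  # only the elephant flow, flowA, crosses the threshold
```

The sketch only over-estimates, never under-estimates, so every true heavy hitter is caught; occasionally a small flow may be promoted by hash collisions, which is the accepted trade-off for constant memory.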
K: SFCMon is basically a P4 program, and it is in charge of the SFC data plane. For this, we created a P4 parser for the network service header, or NSH. It's worth mentioning that NSH is the encapsulation protocol used by SFC to interconnect the network service functions.
K: Besides that, we implemented two of the main SFC components, the classifier and the service function forwarder, and for this we used P4 tables, as you can see in the figure. And of course we provide an SFCMon reference implementation, and for this we used P4 registers to implement all the probabilistic structures that we use.
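The invertible Bloom lookup table mentioned earlier, used to store the records of the filtered large flows, can be sketched as follows. This follows the textbook construction (cells holding a count, a XOR of keys, and a XOR of values, decoded by peeling "pure" cells); the toy multiplicative hash and the parameters are assumptions for illustration, not SFCMon's implementation.

```python
class IBLT:
    """Minimal invertible Bloom lookup table (toy hash, illustrative sizes)."""
    PRIMES = (3, 5, 7)

    def __init__(self, m=32, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_sum = [0] * m
        self.val_sum = [0] * m

    def _cells(self, key: int):
        # Toy multiplicative hashing, one cell per hash function.
        for seed in range(self.k):
            yield (key * self.PRIMES[seed]) % self.m

    def insert(self, key: int, value: int):
        for i in self._cells(key):
            self.count[i] += 1
            self.key_sum[i] ^= key
            self.val_sum[i] ^= value

    def list_entries(self):
        """Peel pure cells (count == 1) to recover all stored records."""
        count, key_sum, val_sum = self.count[:], self.key_sum[:], self.val_sum[:]
        out, progress = {}, True
        while progress:
            progress = False
            for i in range(self.m):
                if count[i] == 1:
                    key, value = key_sum[i], val_sum[i]
                    out[key] = value
                    for j in self._cells(key):   # removing may make others pure
                        count[j] -= 1
                        key_sum[j] ^= key
                        val_sum[j] ^= value
                    progress = True
        return out

t = IBLT()
t.insert(0xA1, 5)   # e.g. flow-id -> packet count
t.insert(0xB2, 7)
print(t.list_entries())  # recovers both records: {161: 5, 178: 7}
```

The appeal for a data plane is that insertion is a fixed number of counter updates and XORs per packet — a good match for P4 register arrays — while full flow records can still be recovered later by the controller.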
K: Moreover, we also extended the behavioral model and the P4 compiler code in order to support more hash functions, so that the solution works properly. And finally, we have the system controller, which we implemented in Python and which uses the P4Runtime to interact with the SFC switches. It basically has two main functions: the first one is responsible for creating and removing SFCs on the underlying infrastructure, and the second one—
D: Okay, so I see — clean. Another nice feature, for a draft to be submitted to the working group we are talking about: you could actually add to your architecture an extra data structure, like a sketch, to detect the elephant flows, and you can apply some function — like the INT in the previous talk — to actually monitor the performance of these large flows. That makes the function richer.
N: Thank you. A quick reminder: this draft discusses what we think in-network computing really means in, say, COIN research discussions, and explores different options and compares them a little bit. It also looks a bit at different interesting forms of computing — we heard an example earlier — and tries to create some basis for a discussion of what COIN could or should look at.
N
So
there
is
lots
of
computing
in
the
network
today,
so
we
we
heard
a
few
examples
again
just
before
us,
so
things
like
smart
NICs,
web
server
CD
ends
various
cloud
platforms
quite
often
when
we
say
H
computing.
This
is
typically
what
you
typically
mean
is
extending
that
the
cloud
computing
concepts
so
manage
infrastructure
to
specific
hosts
at
the
edge
so
co-located
to
base
stations,
for
example.
So
we
think
that
so
these
approaches
are
applied
more
or
less
today,
and
probably
don't
need
that
much
coin.
Research
anymore.
N: So this is a probably rather static environment that could now maybe be used as, say, an execution environment or something, but that's not where we think the interesting research questions are. If you look at the other side of the spectrum, there are all kinds of super relevant, super successful deployments of application-layer streaming and data processing frameworks, typically for the cloud.
N: These are in general application-layer solutions that, in the case of Flink, for example, allow you to arrange processing steps in a pipeline that you can program, provide scalability for that, and so on. These systems are really nice; they run well in the cloud. But because they run as overlays, they cannot take advantage of the network that well — they don't have full visibility. It's a bit like connecting functions through virtual pipes.
N: We just heard about monitoring for that. So function chaining has been designed to chain functions in the telco cloud — things that, you know, process data or certain flows, with a certain trust model. We had a description earlier, and we got some comments with, I think, four useful extensions.
N: So in general, function chaining is about flow and packet steering, and typically encapsulates packets so they can arrive at the right service function forwarder, for example. I think just now, in November, a few colleagues published RFC 8677, which is a proposed optional extension to the SFC framework that uses one layer of indirection — a name-based scheme for naming the functions — and then describes how this could be mapped to lower layers.
N
So
that's
the
it's
a
coin
system.
If
you
want
that
is
implemented
with
information,
centric
networking,
and
so
the
idea
is
that
wanted.
You
know
treat
computing
as
a
first
class
citizen
in
a
system
and
make
it
possible
to
reason
about
network
computation,
so
I
have
a
system
where
you
can.
You
know
scale
out
so
function.
So
if
like,
for
example,
you
have
say
function,
there
is
yeah
popular
or
needed
a
lot.
N: So nodes could be part of different distributed application contexts; they could offer their resources to, say, distributed applications A and B, and maybe I would try to pick between those. In such a distributed application system, we assume that we are able to instantiate invocation engines on those platforms, and then we distinguish between different types of functions or resources.
N: So: stateless functions, like a merger, that enable idempotent operation; state, as actors — something that keeps state; and also data. So for the decision of how to lay out the graph, we also consider where data resides, and the application semantics, and then some resource allocation strategies dynamically—
N: —you know, transfer parameters, transfer results back, and so on. It doesn't make any assumptions about how complex those functions are — they could be really small ones, but maybe also more complex operations — and one function can also, of course, dynamically trigger invoking other functions somewhere else. In a system like that, you have to manage information like: where are the functions? How are the resources utilized — loaded, overloaded? What is the currently observed performance?
N
So
trying
to
put
this
into
our
coin
discussion
here.
So
the
coin
elements
that
we
use
in
that
system,
and
so
we
are
managing
resource
availability
and
so
load
information
and
disseminate
that,
and
so
we
are
using
this
debilitated
actress
for
that
so
Sierra
T's
and
these
are
shared
by
the
nodes
that
take
take
part
in
the
distribute
application
context.
We
have
a
transport
and
remote
method,
invocation
model
that
is
using
a
system
called
rice
or
remote
method
invocation
in
ICN.
N: Another piece of work we did earlier, for deciding dynamically — sort of a late-binding concept — where a function is actually going to be executed: we use ICN forwarding hints in this system. The programming and execution environment here is Python. Again, as I mentioned before, the system itself is general enough that it wouldn't have to be just Python, but we used it in this system. In terms of categories for computing, we distinguish functions, actors, and data, and the naming of a function — that's done by ICN naming.
N: We analyze the resource descriptions and then make decisions like where certain functions would be allocated and executed. And this is a bit of the terminology that we are using: a program — that's the set of computations requested by a user. I'm describing this because I thought it could be interesting to have some kind of terminology, just for comparison, of how we deal with certain concepts in this particular system. A program instance — that's the instance of the program that we are currently executing. A function—
N: So in the program, when we describe a function execution, we get back a handle that later allows us to retrieve the computation result. And "worker" — we use that term for specifying the exact locus of a function or actor in a particular program instance. Here's an example of a distributed application, or program, that we define in Python. This is regular Python code, and we added these decorators that allow us to describe what type of remote function we are—
N: —defining: a transparent function, or an opaque function, or an actor. So we are programming this in the system; all the nodes that participate in one program instance need to have that program. And then, as with any other program in a Turing-complete system, we cannot predict what function will be called at what time — this is decided dynamically. So the nodes basically share this computation graph: it is known where data resides, what the chaining of the functions is, and so on, and this graph is constantly updated.
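The decorator-based declaration of remote functions and actors described above can be sketched as follows. This is a loose illustration of the idea, not the actual API of the presented system: the decorator name `remote`, the kind labels, and the `Handle` class are all assumptions for this example.

```python
import functools

# Hypothetical registry shared by all nodes that hold the program.
REGISTRY = {}  # name -> (kind, callable)

class Handle:
    """Stands in for the handle used later to retrieve the computation result."""
    def __init__(self, value):
        self._value = value
    def get(self):
        return self._value

def remote(kind):
    """Illustrative decorator: tag a function/actor with its remote 'kind'."""
    def wrap(fn):
        REGISTRY[fn.__name__] = (kind, fn)
        @functools.wraps(fn)
        def call(*args, **kwargs):
            # A real system would pick a worker and execute remotely; here we
            # run locally and return the result wrapped as a trivial handle.
            return Handle(fn(*args, **kwargs))
        return call
    return wrap

@remote("transparent")   # stateless, idempotent -> free to re-execute anywhere
def merge(a, b):
    return sorted(a + b)

@remote("actor")         # stateful -> pinned to one worker per program instance
class Counter:
    def __init__(self):
        self.n = 0
    def incr(self):
        self.n += 1
        return self.n

h = merge([3, 1], [2])
print(h.get())           # [1, 2, 3]
```

The point of the kinds is exactly the distinction made in the talk: a "transparent" stateless function can be scheduled on any worker at call time, while an actor's state pins it to one locus in the program instance.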
N: So this lives in a distributed data structure with non-conflicting merge operations. It could be that, for example, this function called extract_features is allocated on node 1 by, say, one branch of the program; another branch may allocate it at node 2. These decisions are shared in this CRDT and can then be merged with a non-conflicting set-merge operation, so that we consolidate this compute graph. Each node has something we call a task scheduler that dynamically makes the decision.
N
So
when
I
I
see
in
my
program,
analysis,
ok
I
have
to
execute
this
function
now.
Next.
That
makes
the
decision
where
this
should
actually
be
executed.
So
it's
possible
that
the
function
was
quite
likely
has
already
been
allocated
when
in
sense
on
node
and
the
system
has
information,
for
this
could
be
say
in
the
forwarding
information
base
or
somewhere
else.
Here
we
are
using
these
IC
and
forwarding
hints
to
to
steer
the
request
so
that
we
may
be
know
specifically
so
know.
N: So we have certain mechanisms in ICN that could help us to leverage features of the network system directly and make this quite elegant. Coming back to the COIN directions draft: when we talk about computing in the network, this is clearly more than just forwarding packets to nodes that happen to, you know, live on the ends, or to host VMs or processes — that can be done today.
N
So
what
I
think
it's
really
interesting
and
has
lots
of
potential
is
really
embracing
the
idea
of
supporting
disability
computing
by
trying
to
leverage
the
concepts
and
mechanism
that
we
know
from
from
networking
and
try
to
build
a
better
combined
system
so
better
than
just
building
better
pipes
for
the
draft.
So
we
want
to
document
more
relevant
and
representative
use
cases.
Somebody
mentioned
with
me.
Second
routing
I,
think
that
it's
a
nice
addition-
and
so
we
see
this
as
a
contribution
that
should
help
the
discussion
in
the
group.
N
O: —to them, the distributed computing. So you're saying that the RG would only want to take on maybe smaller building blocks, and not all of this? Because these all look very much like distributed applications, right? So I'm not sure to which level you want to constrain the RG, and then it's really hard to figure out from these — are these all just examples, given? Right.
O: Yeah, but I mean — the distribution might be on top of what you call Internet infrastructure, right? What is or is not going to in fact be part of a future Internet infrastructure here, right? So — right. Well, I think a clear delineation, or a description, would help very much. Can I make a second point—
Q: Colin Perkins. If I can just follow up on this: there is clearly, to some extent, overlap between the work happening in this group, the naming and decentralization work in DINRG, and some of the work going on in ICNRG. I don't see that as a problem — there's no requirement in the IRTF that the groups are completely distinct and don't, at least to some extent, overlap in their work. So I think the focus is sufficiently different that these are small overlaps; we won't have a fight.
S: [Name unclear] from China Mobile. Thank you for the presentation — it's very helpful. Just a quick clarification question: do you distinguish computing in the network and programming in the network, I mean in terms of your methodology?
M: Let me jump in just for a second — I'm a co-author of the paper here. There are aspects of P4 which are relevant and aspects that are not. So if the research is "how do I build a better P4 compiler," I think the answer is no, right? If the question is "how do I build a better TCAM-based switch to run P4," I think the answer is no.
M: On the other hand, if the question is what types of computations I can express in P4 on network-style hardware, that's very relevant. So I'll point you to a paper at HotNets just a week ago, by Noa [Zilberman] and some other folks, that demonstrates exactly what kinds of neural network computations could be done and expressed in P4 on a P4-style in-network computing element. They have a very nice investigation of what the possible things you could do are, and what the limitations are.
I: So, first of all — your name, please? — sorry: Tim Wattenberg. I like these roadmap drafts, in addition to agendas, so thank you very much for writing this; I think it's useful. Second, just a meta comment: I saw that you're intending status experimental, and besides RFC 2026 there's also a document put up by the IESG on the definition of — why and when to choose experimental and when informational. — Oh yeah, okay, so all right, maybe go for informational. That's all. — Yeah, that should have been the case. Okay, thanks.
C: We have a question from the chairs. We were just curious about whether you are considering bringing your PoC into, like, the hackathon or something — have you thought about that? Right.
T: Okay, thanks. Hi, my name is Klaus, and I'm presenting a draft on transport issues for COIN. This is joint work with Ike Kunze — he presented our last draft, on industrial use cases, in Montreal. Okay. So if we are looking at the transport layer, this is the typical notion that we have in the IETF of the transport layer: all computation, and especially anything modifying application payload, is done at the network endpoints. So in the core network—
T: —we typically should not deal with transport protocols themselves. But if we now introduce computing in the network — and in this case I'm referring a bit more to the edge cloud cases — of course, when you ask how the transport layer is present there, I guess you will have something like n times end-to-end connections: between, let's say, the sender or the initiator and the intermediate compute point.
T: So this is actually a quite simple case. But if you refer to the more, let's say, P4-style programmable data planes, I guess you will not have a transport endpoint in your P4 switch. And if you think of the use cases that are discussed here in the research group — that we have already presented, or that other people think of — several interesting transport-layer issues come up, and this is what we want to do with this draft: we want to raise the discussion on this.
T: We thought about already suggesting some solutions, but we wanted to first discuss this with the research group, and of course we invite everyone to this discussion, to provide interesting input on how we can do that. So what you see in this picture is how we think of it: if you do have programmable data planes, and you do a bit more than just a bit of traffic engineering with your programmable data plane — if you're really doing computations, and sometimes somewhat heavy computations, on your payload—
T: —I refer to this as n times end-to-end. Okay. So there is no simple solution to handling the transport-layer issues in the typical Internet architecture; that's why we wanted to raise the discussion. The points we want to address in this first version of the draft are, first, of course, addressing: how do we address these intermediate points? How do you say, I want to have this computation there and there — how do we address that? And what about the flow granularity — on which basis do you do this processing?
T: Is it on a packet basis, on a flow basis, or a message basis? How do you authenticate, maybe, the computations? How do you deal with security — which is also what the IAB said should be heavily addressed — and some other transport features, where the problems are similar to the first ones I just mentioned. Okay, so: addressing. How do you address these compute points, these intermediate nodes? You can typically do that by IP addresses and ports, of course. So do you address your programmable data plane like that?
T: Or do you just want to say, I want to have this kind of computation on the data path, and I'm not so much interested in where I do it — so a bit more ICN style? Or do I want to have more location-based addressing? Do you want a looser addressing style, where you don't really care in which switch it happens, or do you want to give a strict sequence of where each computation should happen?
T: That's why I put this pointer in. Of course, there are other research groups and working groups that address several of these issues, and we put these pointers in to get into discussion with them. Flow granularity: what about the processing granularity? Do you do it just on a packet basis, where you probably have only low state requirements, or do you do it more on a message basis?
T: So that's a lot more like message-based analysis — or do you do it on a stream? There, it really depends on the application use case how much state you need, and how to reserve the buffers and so forth for that; for us, that's also a transport-layer issue. Another problem is authentication: as an endpoint, you probably want to know who touched your payload and who did which computation. What was changed? Who made the changes?
T: And maybe: how can you synchronize the state among these changes? In our notion, that's also an issue for the transport layer — syncing these changes among the different intermediate nodes. Okay. The [unclear] working group may also have some input or some relevant work for this. Another thing is, of course, security. The trend is going towards fully encrypted traffic, even in headers. So how should an intermediate node work, or do computations, on encrypted payload?
T: That's also a transport-layer issue, in my opinion — that you enable that, or that you provide solutions for it. So possible solutions could be decryption at intermediate nodes — but then you can ask what the encryption is good for. Maybe you have option headers, where just the payload that is used for computation may be decrypted, with special session keys and so forth. Or homomorphic encryption — but I guess nobody wants to have that on a P4 data plane today. Yeah.
T: Then another question is, what about retransmission? For example, in this case we have a scenario where certain computations have been done on packets, and in the lower part of the figure you see: when the packet passes these switches, these computations have been done on the packet — and then perhaps the packet is lost. So who is responsible for doing the retransmission, and what happens with the state in the switches that is relevant to this packet?
T: If the packet is retransmitted, must these computations be done again, and maybe must the state be revoked? That gets really difficult for certain use cases — if you count statistics, the state depends on how many packets there were. I guess we do not want the notion of a transaction that is revocable with regard to the state in the switches.
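The retransmission problem above can be made concrete with a toy example. This is an illustration of the trade-off, not a proposal from the draft: a switch that naively counts packets double-counts retransmissions, while making the update idempotent (here, by deduplicating on an assumed per-packet identifier) fixes the count at the price of extra per-flow state in the switch.

```python
class NaiveCounter:
    """Counts every arriving packet; retransmissions inflate the count."""
    def __init__(self):
        self.packets = 0
    def on_packet(self, pkt_id):
        self.packets += 1

class DedupCounter:
    """Idempotent update: replays are absorbed, at the cost of keeping state."""
    def __init__(self):
        self.seen = set()          # extra per-flow state held in the switch
    def on_packet(self, pkt_id):
        self.seen.add(pkt_id)      # adding an already-seen id changes nothing
    @property
    def packets(self):
        return len(self.seen)

trace = [1, 2, 3, 2]               # packet 2 is lost downstream and retransmitted
naive, dedup = NaiveCounter(), DedupCounter()
for pkt in trace:
    naive.on_packet(pkt)
    dedup.on_packet(pkt)
print(naive.packets, dedup.packets)  # 4 3
```

This is the same correlation a later comment in the session points out: the more idempotent the in-network computation model is, the easier the retransmission questions become for the transport.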
T: Okay — similar questions arise when you look into congestion control, flow control, and so forth. These are all questions of who is actually in charge of doing something when something goes wrong, or when certain things happen in the network — because we no longer have the complexity only at the endpoints; there may be several intermediate points that do computations and changes on the payload. The thing is, there is no simple solution; there are different features that heavily depend on the use cases you want to realize, and so forth.
T: So that's why we wanted to raise the discussion. These are the typical use cases that have already been discussed in the research group — data center computing, stuff in the network; we've suggested industrial networks; and of course you can think of the general Internet of Things. And you see, every use case has different requirements and needs different transport features. So the idea of the draft, since there is no one-solution-fits-all, is that we raise the discussion and also get additional feedback from others.
Q: I think — I mean, you're talking about how different application scenarios and different use cases affect the transport, and I certainly agree that this makes sense. I think the computation model you're using on the intermediate nodes also affects the transport: the more idempotent you can make it, for example, and the less state you keep in the network, the easier some of the retransmission questions clearly get. So I think there's a clear correlation between the computation model and the needs of the transport, and they should be discussed together — and the programming model. Mm-hmm.
O: Just to follow up on that: you started by saying, if we have P4, then we may not be able to have the classical transport stack, right? So the starting point, to me, would be to classify the, you know, types of in-network compute options that we see now or in the future, that are more constrained, and what we could do on them anyhow, right? So: what is the compute that could be done? That's a starting point — that's the motivation, to me. And I think there was all of these—
T: Since we are working on several examples and problems in this space, we actually ran into these transport issues. We have several things that we did around transport with P4, and it always comes up: okay, what happens then? And this is actually nothing that should be solved by P4 or the programming model — but yeah, I agree that maybe the research group should also look more into the programming model here.
O
And
I'm
not
I,
think
was
it
a
Colin
or
somebody
else
or
Denis
I
was
it
was
Dave
right,
so
the
it's
really
the
the
degree
of
constraints
right.
So
in
the
IOT
space
there
were,
you
know,
RFC
is
coming
up
as
certain
constraints
on
the
amount
of
memory
and
CPU
cycles,
and
you
know
to
the
extent
that
we
can
qualify
this
constraint
for
the
notes
that
we
look
into
and
then
basically
say
what
we
can
do
for
compute
within
those
constraints.
N: —and so forth. Okay, so that could be useful for the group: to distinguish these different models and have a good understanding. And just quickly — I think you nicely pointed to these, even the timing issues, for example, in transport. What we ran into earlier is — I mean, there are really different time scales: processing or application times, the network round-trip timeouts, and retransmission time scales. So I don't want to sell anything.
U: One of the things in the discussion between [unclear] and us is precisely that this is one of the — I guess one of the main challenges we have here: making edge patterns and, let's say, general Internet patterns converge. It is a real challenge, I would say one of the first things we have to address, because we have here two universes that are talking in different forms. That reminds me of the list, where you were talking precisely about security and the like.
U
First
of
all,
don't
believe
that
quantum
quantum
key
distribution,
at
least
is
not
so
far.
I
can
tell
you
we
we
are
running,
I
mean
we're
running
some
some
pilot
saying
in
production
network.
So
so
we
could
play
with
this,
at
least
with
the
keys,
not
with
the
rest
and
perform
something
that
there
are
a
couple
of
additional
technologies
that
could
be
worth
trying
here
when
is
when
it
can
was
a
multi
context
crypto
when
it
was
when
the
kitchen
I
know
that
has
being
banned,
I
mean
for
very
good
reasons.
U
When
it
comes
it's
a
challenge
for
privacy,
etc
in
general
exchanges
in
this
context
could
have
some
some
play.
Some
role
to
play,
and
second
and
second,
is
something
when
you
mention
about
authentication,
how
do
I
have
to
look
at
the
ace
I
have
not
but
see
in
the
in
SSC?
We
were
working
in
something
that
is
called
proof
of
transits.
Okay,.
M: Pretty quick. So after I read this, I reached a very different conclusion, which is that if you have to consider all of these issues all over again in this slightly changed context, we may be completely missing the boat — in the sense that maybe we don't want to have a transport layer with an identifiable transport protocol for these things.
M
So
I
would
sort
of
maybe
add
in
here
the
possible
thing
that
we
may
be
exploring
a
gigantic
rabbit
warren
that
if
we
pop
up
a
level
and
say
what
do
we
actually
need
for
a
distributed
computation
to
run
on
these
on
a
given
underlie
underlying
topology
without
abstracting
out
transport
separately
from
the
computation,
we
may
wind
up
with
something
dramatically
simpler
and
dramatically
more
powerful.
That's.
V: The main changes from the last version: the comments we received have been collected and categorized into performance, function, and management. These are the comments we collected in this version. In performance, one comment concerns latency and reliability requirements, which depend on the service demand. Delay can be divided into in-time and on-time, corresponding to low latency and deterministic latency. For example, the industrial Internet may need deterministic latency, such as for motion control, and the consumer Internet—
V: —may need low latency, for example for gaming and video. Reliability includes the transmission path and the packet loss rate, and we can point to some existing technologies to solve those problems. The second performance comment is high concurrency: because the number of computing nodes increases, and we may distribute some computing and other algorithms to different nodes, there may be a lot of parallel computing between nodes. With the trend towards the interconnection of everything in the future, this will bring a great challenge to network connections and performance.
V: The third is security. This doesn't mean the traditional security of the data or the network; it's because multi-domain networks may not only need to communicate with each other, they may need to be integrated with each other, such as at the protocol level. For example, an operator's network may go deeply into the vertical industrial Internet, at the user's site, to provide a better network service, and that may bring some problems between the operator's network and the user's network. Then, going through the function requirements—
V: —the first is computing-aware scheduling. It means that dynamic computing-power matching is carried out based not only on the network status, but also considering computing resources, to achieve the optimal user experience; the computing resource information can be exposed between the parties. The second is function-based addressing: the application components are deconstructed on the server side and distributed to the cloud platforms.
V: The management requirements: the first is cross-domain management. This is to guarantee end-to-end network management, to meet the needs of different network capabilities and performance functions, which involves cross-domain network management. The second is simple management. It doesn't mean that we just need a few management functions; it is because scheduling and cooperation among the different network domains, operators, and users are very complex problems, so we need an effective management system.
W: The other slide covers the use case I introduced at the side meeting: edge-cloud-based recognition for AR. Basically, the AR app on your mobile phone will send an image, or a stream of images, to the edge, and then the result of the recognition will come back to your mobile phone. Face recognition is kind of lightweight and simple, but object and motion recognition can be difficult.
W: Okay, so this first slide is really what I want to elevate. Basically, the idea comes from the service characteristics at the edge: service equivalence. There will be hundreds of nodes providing the equivalent service to the clients, and because the edge has limited resources — typically on the order of four to ten servers — and the edge is also less reliable than the cloud, it cannot be scaled out the same way.
W: So this kind of dynamics means: which one is the optimal service instance for a specific client or a specific user? It can be dynamic, subject to proximity, load, network conditions, etc. It's easy to use anycast to address the service equivalence, but can we make it more dynamic, to adapt to the conditions — where "conditions" means both the network condition and the service condition?
B: Jeffrey, is this like a definition of the challenges — another research project? What happened at the side meeting — I think this is what we would like to know. And I also have a question: there is already in this group a draft on AR at the edge and cloud, and I would really like to know what the difference is. Yeah.
W: I will come to that. So basically, this proposal is trying to leverage anycast — the entities along the routing path select the path — so the emphasis is on the path selection, not on the application use case. There are two challenges. One is flow affinity: you should avoid routing different packets of the same flow to different service instances. This can be easily solved if you are able to establish a flow table, right?
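The flow-affinity point above can be sketched in a few lines: an anycast-style selector may pick a different (equivalent) instance as conditions change, while a flow table pins each flow to the instance chosen for its first packet. The instance names, the load metric, and its numbers are made up for this illustration.

```python
# Assumed instance -> current load mapping; a real system would learn this.
instances = {"edge1": 0.9, "edge2": 0.2}

def least_loaded():
    """Stand-in for dynamic anycast selection by network/service condition."""
    return min(instances, key=instances.get)

flow_table = {}  # five-tuple -> pinned service instance

def forward(five_tuple):
    # First packet of a flow: choose an instance; later packets: reuse it,
    # even if the "optimal" instance has changed in the meantime.
    if five_tuple not in flow_table:
        flow_table[five_tuple] = least_loaded()
    return flow_table[five_tuple]

flow = ("10.0.0.1", "203.0.113.9", 40000, 443, "TCP")
first = forward(flow)
instances["edge2"] = 1.5            # load shifts mid-flow...
later = forward(flow)
print(first == later)               # True: affinity preserved despite the shift
```

Without the table, the mid-flow load shift would redirect later packets to a different instance, which is exactly the problem the speaker says a flow table solves.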
W: Yeah. So then, several of the implementations: I introduced one that is based on BGP, by my company, in the control plane. Basically, it associates two addresses: one anycast address, and one unicast address for every service instance. There are binding procedures and a protocol concept, and some preliminary tests were introduced. There was also the other presentation, from Rich, also leveraging anycast, in the context of SFC — but they are using it with peer selection. And this is a kind of summary.
W
So
basically,
the
emphasis
is
on
the
dynamic
in
the
past.
Probably
I
will
call
it
CF
and
Lancaster
to
differentiate
this
from
CF
CF
and
a
seein
okay.
Also,
these
two
similar
a
few
questions
unanswered
sample,
which
is
the
service
and
is
the
service
replacement
in
the
scope.
But
the
answer
is
that
the
main
emphasis
is
on
the
path
selection
selection,
not
not
exactly
those
service
placement,
although
that's
part
of
the
photo
solution.
B: There's been confusion, because this work uses an acronym, CFN, that has a very different meaning in this research group, and we are working on finding semantics that will make sure that we all understand what is what. So — we're out of time, I think. Thank you very much. We had eight people on the line at one point, and there are 72 people here as well. Also, if people haven't signed the blue sheet, please come. Thank you so very much again—
B: —for staying over on Friday. We plan to have a virtual interim meeting sometime in February; that's going to go on the list. Again, if you haven't signed the blue sheets, or haven't subscribed to the list, do it. And I would like to thank all of you, and also obviously my two co-chairs — you know, we call ourselves [unclear] — and I hope we'll have a long life. Thank you.