From YouTube: IETF104-COINRG-20190328-1050
Description
COINRG meeting session at IETF104
2019/03/28 1050
https://datatracker.ietf.org/meeting/104/proceedings/
C
Okay, thank you so much. So again, welcome to the first actually official meeting of Computing in the Network. In Bangkok we were a side meeting; in the meantime we've been approved as a proposed research group. I'm the person in the middle, Marie-José Montpetit, with my co-chairs Jeffrey He and Eve. We have a fairly light agenda today, but we intend to have a lot of discussion, so we are going to have targeted presentations and leave a lot of time for discussion.
C
This is obviously the Note Well that we have to show, so everybody is aware of that. I won't go too long in terms of documents and locations. We do have a wiki, and since last time we also have an official mailing list, and there are six people currently participating remotely on Meetecho. This is the agenda that was there before: we want to talk a little bit about the current charter and some of the goals that we set for ourselves, and Eve has a few slides.
C
We have a paper that is actually a common paper, presented at the ICNRG interim meeting on Sunday, but it's absolutely related to the stuff that we want to do in naming. There's also going to be a presentation coming from the T2TRG in terms of what they do at the edge, and then we're going to have, remotely, Noa Zilberman from Cambridge University, who has these great ideas about what computing in the network can become. And then we want to open the discussion.
C
We just have a few slides to make sure that you discuss, and then we'll stop. We have three ideas right now: one from Jeffrey on managed networks that is evolving; I started writing something on extended reality, which uses a lot of edge, a lot of computing and a lot of naming; and obviously there's a draft from Eve and her collaborators on edge discovery. We obviously want more, and we hope that today's discussions and presentations will spur your enthusiasm to submit more documents and make this even more successful.
E
So to introduce this just a little bit more: we will explore where there are common principles, abstractions and architectures, etc., and also what the impact of this kind of in-network computing is on traditional transport and security, because the packet goes through this in-network computing and that has an impact, right? So, what's the impact on reliability and congestion control? This could be something interesting, and also others.
F
Basically, on the mailing list, if you've been participating so far, we had some dialogue around the charter and our objectives. What we all tended to agree on, even though we come from various disciplines, some squarely grounded in the data center, others very much at the bleeding edge of edge computing, was that there's this continuum, and that we shouldn't really be thinking of them as separate places for conducting compute in the network. So we were calling it the continuum, and therefore we were thinking of the time-space continuum, the everything matrix.
F
Are you reloading? Okay, okay. Let me see if the clicker works. Oh yeah, there you go. Let's go back to the matrix.
F
It's sort of working, all right. So what does the continuum actually really mean? In the beginning, when people talked about edge computing, they said: there's a back-end data center, and there are on-prem data centers, or data centers closer to the edges of the network. That was the framing of the discussion: that edge computing was data centers, but just closer, more proximate to where the action is happening, where the data is getting created, where it's getting processed and, ultimately, where the decisions are being made and maybe even actuation is happening.
F
But if you watch what's happening now, there's this proliferation of edges, not just at the edge, with many providers providing edge computing, and the definition of the edge is fairly ambiguous because, depending on who you are, these edges may be the telco edge, the enterprise edge, the game-console edge, etc. So there's this proliferation of edges, and some of these edges are happening in very managed networks while others are more in the wild.
F
If you think about smart cities and such, it's really about the federation of resources that are proximate. And if you follow the arc of this discussion, in the extreme everything becomes an edge or, depending on your perspective, whether the glass is half full or half empty, all of the things become data centers. So it's really a reimagining of the data center, and the conversation about edges is really a stepping stone to beyond edge computing.
F
So now that we've established that there are other kinds of edges: not every edge is going to provide all of the functionality that you might find in a data center, by which I mean compute, network and storage, as well as control, data management and so forth. We've now got little pieces of the data center scattered about, so there's really a disaggregation happening as well, regarding the data center with respect to its service offerings.
F
We had a presentation last time about the Buddhist witness, which was about the upstream flow of data, and then finally smart cameras: there's obviously a lot of work on compute being about the analytics of object recognition and the labeling of video streams in flight. In addition to the fact that location influences the kind of compute that's happening, some of that compute, as I mentioned, is going to be more managed than others, or at least the devices on which the compute is happening are going to be more managed.
F
The implication of this continuum is that we have to ask what the form factors of these elements in the network are. What are these boxes doing? What kinds of infrastructure? What's also quite interesting right now is where the stationary infrastructure meets up with the mobile infrastructure, and so this kind of conversation about computing.
F
But we could also be talking about putting the compute on base stations and access points, and then we also have to consider, in the extreme, that the other places for this compute could be the devices themselves. Really, one of the discussions we'd like to have today is: should the devices be part of the continuum, from the cloud to the edge and then to the device?
F
Lots of other questions. I won't go into all the details, but just to get you thinking for the discussion: where in the architectural stack, whether in hardware, firmware or software, should this compute reside? Because we tend, as a community here at the IETF at least, to talk about network boxes and compute that is about network functionality, and, as Jeffrey alluded to, that is for truly improved performance of compute in the network.
C
Okay, so we will now have the presentations, and we decided that we were also going to put questions in your mind before we start, so we expect that there are going to be discussions and questions. What are the goals of today's presentations? Obviously we want to explore the COIN continuum and the state of the art; we want to define the potential topics and some of the stuff we would like the research group to look at; and we want to identify, obviously, what else is needed.
C
There are a few people who've been strong contributors, and I see a bunch of very new faces here, so there are probably ideas that none of us has had before. We also want to hear opinions about what we should be doing and what we are, and identify, obviously, drafts and contributions. So now it's over to you. Okay, so now you have to tell me how.
B
The basic idea in this comes from keyword-based naming for ICN-based IoT, which was presented in this paper by these people at the ICN conference in 2017. They basically offer a first idea: that you ask the network to get a piece of named data, that you don't request it from a certain place in the network. You just ask the network to get you this data, and then it delivers.
B
So yeah, that's what we're doing: we find the data, process it, and also store the results in case somebody makes the same request within a certain time frame. As a basic example, we have devices that store data in caches at the edge of the network, and then somebody is interested in some information and sends their request to the network.
B
So what we are investigating here is what good naming for this looks like, and I should say this is not a finally developed naming scheme or anything like that. We are investigating this in a Horizon 2020 project, an EU-Japan collaboration project, where we are working on this; these slides are intended to present some ideas on how it can be done. Existing approaches include hierarchical naming.
B
Hierarchical naming is, for example, used in ICN approaches such as CCN. Then there is the tag-based naming proposed in the paper mentioned, and there is an RFC that explains how you can use hash-based naming in a structured way; it comes originally from another ICN approach, NetInf. So the proposal here now is to combine this tag-based and hash-based naming. What we would have is an authority part, which you can also think of as the publisher.
B
Someone is publishing some information under a name, so this authority can also be used to route to the domain where the data is published. You have a digest, which is the name of the data and can be used for caching. Then we have the function name, if one is needed, and then you have a number of keywords, or tags, or whatever you want to call them, that are used to identify the data that should be used.
B
There isn't much to say about the function name, because it's obviously the name of the function. Then for the keywords: one of the reasons they got to this idea is that structuring data like sensors in a building, with floors and so on, in a hierarchical way is not always easy, because different applications have different ideas; somebody is more interested in focusing on buildings, others on the floors, and so on. So it soon becomes awkward to have hierarchical names.
B
With this keyword approach you can just give a set of attributes that the data should match, and those data that match that set of attributes are what is returned. This, of course, doesn't scale globally, but the claim is that in a local IoT domain it would scale. A concrete example from the UCL campus, computer science: you have the authority part, and then if you want the maximal temperature in the foyer, it could look something like this.
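The combined scheme described here can be sketched roughly as follows. The name layout (authority, digest, function, keywords) follows the slide, but the separator syntax and the UCL-style example values are illustrative assumptions, not the project's actual encoding.

```python
# Illustrative sketch of a combined name: /authority/digest/function/tags.
# All concrete names and values below are assumptions for the example.

def parse_name(name):
    """Split a name of the form /authority/digest/function/k1=v1,k2=v2."""
    authority, digest, function, tags = name.strip("/").split("/")
    keywords = dict(kv.split("=") for kv in tags.split(","))
    return {"authority": authority, "digest": digest,
            "function": function, "keywords": keywords}

def matches(data_attrs, requested):
    """Data matches if it carries every requested attribute."""
    return all(data_attrs.get(k) == v for k, v in requested.items())

# e.g. the maximal temperature in the foyer of a campus building
req = parse_name("/ucl.ac.uk/-/max/building=cs,room=foyer,sensor=temperature")
sensors = [
    {"building": "cs", "room": "foyer", "sensor": "temperature", "value": 21.5},
    {"building": "cs", "room": "lab1", "sensor": "temperature", "value": 19.0},
]
hits = [s for s in sensors if matches(s, req["keywords"])]
```

Attribute-set matching like this is what makes the flat keyword part order-independent, in contrast to a fixed building/floor/room hierarchy.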
L
So, my name is Klaus, I'm from Aachen University, and I was asked by Jeffrey to talk about our ongoing research in the area of industrial networking and computing in the network in industrial networks. This is, I guess, not a typical IETF talk, and I haven't followed the work of this group so far, so maybe there's a bit of redundancy here. Okay. So if you take a look at an industrial setting, what you nowadays typically call Industry 4.0, you expect something like this.
L
So, a lot of robots doing smart manufacturing. Then, of course, since it's smart manufacturing, people think this is all connected, but there's one component missing where all the smartness is actually executed: there are edge clouds and remote clouds and so forth. Typically, we see three use cases when we think of industrial networking and in-network processing in this area. The first one is network control: controlling the operation of the robots and the machines, with all the control loops and so forth.
L
This is not done locally anymore, in the machine or in some PLCs; it shall be done in edge clouds, or people even think about remote clouds. A second use case is to collect all the process data and analyze it. As I will show later, from such a production process you can actually pull out a lot of interesting data on the production, analyze it, and then improve the manufacturing, learn about it, and so forth. Sometimes you also want immediate feedback: for example, if robots shall cooperate with humans in the same area, then you want to have some safety measures there. And the last thing is offline data analysis: if you have collected all that data, then you do long-term analysis, data mining, machine learning and all those things in order to find interesting things out. Okay, so let's dive a bit deeper into these three use cases. The first one: our network control loops. The setting looks like this.
L
Okay, if you take a look at this, you actually have this communication loop here, and the latency for these things is quite high. Even if you have an on-premise edge cloud, the latency for certain control loops, for robot control and so forth, is in the area of two-digit milliseconds, and that's too high for real-time-critical processes.
L
So our suggestion here, and what we are working on in our research projects at the moment, is to use in-network processing in order to push a lot of intelligence, or a lot of control functions from the control algorithms, into the switches, and to process them in programmable switches, for example with P4 or other means. That drastically reduces the latency, and then you can support these processes. Typically I compare this with the analogy of the human body, where you have the brain, which does the main control of all the actuators.
L
I have a simple example here, an academic example. Typical control people use inverted pendulums in order to show the effectiveness and the quality of control processes. You see here an inverted pendulum where you can control the carriage: you try to move the carriage in a way that the pendulum is stabilized, and in this case the latency is too high.
L
I think in this example it was something like 20 to 30 milliseconds, and the control algorithm was running in an edge cloud; depending on the length of the stick, it's not able to stabilize the process. What we then do is derive a simple version of the control algorithm, push it into a programmable switch, and then, of course, reduce the latency of the reaction tremendously. You see it's actually running here: the inverted pendulum can easily be balanced. Okay, but this is an academic example.
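The fast-path idea can be illustrated with a toy simulation. This is a minimal sketch under assumed gains and a linearized pendulum model, not the project's actual controller: the full algorithm would live in the edge cloud, while a tiny proportional-derivative (PD) rule, cheap enough for in-network execution, keeps the pendulum upright.

```python
# Toy stabilization sketch (illustrative assumptions throughout).

def pd_control(theta, omega, kp=40.0, kd=4.0):
    """Simplified control law: restoring force from angle and angular rate."""
    return kp * theta + kd * omega

def simulate(steps=500, dt=0.005, g=9.81, length=0.5):
    """Linearized inverted pendulum: theta'' = (g/length)*theta - u."""
    theta, omega = 0.1, 0.0            # start with a small tilt (radians)
    for _ in range(steps):
        u = pd_control(theta, omega)   # the in-network fast path
        alpha = (g / length) * theta - u
        omega += alpha * dt            # semi-implicit Euler integration
        theta += omega * dt
    return abs(theta)

residual = simulate()                  # remaining tilt after 2.5 sim-seconds
```

With the latency effectively removed (control applied every step), the residual tilt decays toward zero; with a two-digit-millisecond round trip the same loop can diverge, which is the point of the demo.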
L
Real production examples, and these are examples we are working on, include arc welding: a welding robot which uses a light arc to weld pieces together. Here we have control loops where we require latency in the single-digit-millisecond area, and the interesting things here are the inputs from the sensors that we have.
L
And here you really need immediate feedback in the millisecond area. If we would use edge clouds which are not exactly next to the process, which is typically the case on the shop floor, then we want to use in-network processing in order to do the basic calculations and the basic control in the network. A second example is cooperating robots, where two robots try to do things together, or you have human-robot cooperation, and there you have similar requirements. You have tons of sensors there, which are networked.
L
Each sensor is connected to a different network, and a lot of wireless networks are in the game here. Here we also try to achieve the latency requirements by doing simple computation tasks in the network. Also, in such cases, typically when humans are involved, you want to have augmented and virtual reality in order to cooperate with the robot. A special case here is human-in-the-loop detection: you try to define safety zones, and if a human enters the range of a robot, then you want to shut it down, and therefore standardize that and get it certified.
L
So this is just a simple machine, just one piece of a whole production chain, and you can see below that there are several sensors producing megabits or even hundreds of megabits per second of data, which needs to be transported through the shop floor and stored in the cloud. So our approach here is also that we apply filters in the network, reduction filters, compression filters, aggregation filters, pre-processing and so forth, in order to reduce the amount of data and then enable the gathering of the data and the later processing.
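The filtering idea can be sketched as follows; the windowing and threshold values are illustrative assumptions, not the actual shop-floor filters.

```python
# Sketch of a switch-side reduction filter: aggregate a window of raw
# readings and only forward averages that change meaningfully.

def reduce_stream(readings, window=10, delta=0.5):
    """Average per window; suppress near-duplicate consecutive averages."""
    forwarded, last = [], None
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        avg = sum(chunk) / len(chunk)
        if last is None or abs(avg - last) >= delta:
            forwarded.append(round(avg, 2))
            last = avg
    return forwarded

raw = [20.0] * 30 + [25.0] * 30     # 60 raw sensor samples
compact = reduce_stream(raw)        # only the level changes are forwarded
```

Here 60 raw samples shrink to two forwarded values, which is the kind of reduction that lets the data cross the shop floor and reach the cloud for later analysis.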
L
Okay. So our proposed framework in this area is that we want to use in-network processing to do simple tasks in the network, to do network control, to do reduction and pre-processing of data, and to still have the main control in the cloud; the latency-insensitive, or not latency-critical, part is then done in the cloud. That is what we see as a link to the IETF, and why we want to contribute to this group.
L
From now on we want to raise the discussion about the computational capabilities: what should be in these switches? For our use cases simple math would be nice; it need not be Turing-complete, but execution at line rate, or at least with predictable execution times, would be required. And then we are working on the configuration, monitoring and management issues, so making the "OpenFlow" of pushing computation into the network to do simple tasks.
N
I'm a co-chair of ICNRG, in Germany, and this is really nice, so thank you for this motivation. Just one little comment: I think one of the really important motivations for doing this is that, if you think about these control systems, it's not only about latency but quite often about deterministic communication.
N
That was basically just emphasizing the need to do something here, because you have two options: you can extend deterministic networking all the way to the cloud, which is just super costly, or you can go in the direction along the lines that you described, which seems to be attractive. I had one question. You talked a bit about possible implementations; you mentioned P4 and so on. What is your assumption about, say, the protocols and maybe the security abstractions that would be used? Do you think this is a, say, controlled domain?
L
If I were to engineer such protocols, I definitely would consider security from the beginning. If I take a look at the plethora of protocols that are used in such areas, it's a mess. So actually, that would be a nice playground for the IETF, to actually dive into industrial networks, because they actually want to use IP, because they think it's a cool thing; I guess they do not really understand what it means.
L
A lot of the use cases we investigated use TCP for latency-critical control traffic; that's, yeah, ridiculous to us. But definitely I would consider security an important issue, and especially if you want to do in-network processing, the thing with encrypted data is: how do you deal with that? We have several ideas; I'm happy to bring them in here.
N
Right, so what I wanted to point out is that, if you basically have to take this into consideration, then this may have some impact on the tools that you guys actually use. For example, if you think about P4 programming, often there's the assumption that you can intercept packets and flows and do some computation on those, which would probably not work with many end-to-end security mechanisms.
L
About the end-to-end principle: it means that packets are not touched in the network, except for routing and for changing those things that are defined. If you have transparent elements in the network that sneak into the packet, you can discuss whether that is a breach. But if you actively manipulate a packet in the network, that is for me a breaking of the end-to-end principle, yeah.
I
I had three sort of tightly related questions. One is that it seems like quasi-synchronous communication is important in this environment, and even if your process is not Turing-complete, it is in fact not necessarily synchronous when you hand something to P4 code.
I
Does that mean that any program that's running in the switch actually has to be included in the scheduling algorithms for the underlying network? That's question number one. Question number two: how do you maintain the synchronous clocking that's assumed? And question number three: have you thought at all about program composition in P4? Because you're not going to have one program running all the time and have to reboot the switch if the production line changes what it's doing.
I
Sort of tied up together with that model, oh, there's a fourth question: the model here seems to be that you do data transformation in the switch, but you neither increase nor decrease the amount of data flowing. In other words, you're not consuming a bunch of packets, doing some computation, and generating new packets; it seems you're only thinking about computations that read and modify data in packets as it flies by.
L
They were four questions; I hope I remember them. The first two I can answer: yes, and I do not assume always-synchronous communication here. All the work we do with control theorists shows that we actually do not require synchronous communication; these control algorithms are able to adapt to varying latencies, but they say the lower, the better.
L
The last question: I think there you are mixing up the use cases I showed a bit. For this use case, of course, we do not reduce the network traffic, but in this case we do not have a lot of network traffic. Just a second: in this case we actually have a lot of network traffic, which we try to reduce by filtering, by categorizing and so forth, by reducing the precision. So here we really can reduce the amount of traffic that is produced, tremendously. We have a paper on that.
P
I'm curious, because I actually participated in the side meeting last year. From what you wrote here: do you mean that there will be a kind of southbound interface, a sort of standardized southbound interface, between this computing function and the switches? Or not between the computing function and the switches?
L
Between the, you know; my basic assumption is: we have some main algorithm, some main tasks, in the cloud, but if it is executed in the cloud, it doesn't fulfill the latency requirements for certain use cases, like the first one that I showed. Then we push simple computation tasks, which we derive from this algorithm, into the switch: filtering, fast reactions and so forth. And the northbound and southbound interfaces are then between that, so between the cloud and the switch.
L
So: how do I push simple computation tasks into the network? How do I control that? How do I manage that? How do I deal with state? For example, if the robot is moving, then I have to move state to another switch, and then it's executed there, and so forth. So these are interesting questions, which I address with this configuration architecture here. Thanks.
L
It would be a nice feature, but it's not required. So maybe I can talk for a second about how we started with this. We started with simple eBPF programs in the Linux kernel, and if you want to load an eBPF program into the Linux kernel, it has to be passed through the verifier, and the verifier limits it to 4,000 instructions, no loops and no other such things.
L
Since we are also working on program analysis for protocols, we actually said we can easily guarantee that a program can have loops but will not run infinitely. Typically, when you give somebody the ability to run code somewhere, you are afraid that it can run into an infinite loop, it can have bugs, and so forth.
L
And I guess operators are afraid that, if they had the chance to download something, they would download some code to their switches which brings them into infinite loops and other things. Therefore, typically, you either restrict the effectiveness or the capabilities of these execution environments, by not allowing loops and limiting the number of instructions they can execute, or you have tools which can actually guarantee that such cases cannot happen. I'm more a friend of the second approach, but yeah, that depends; but of course, to have Turing-complete execution.
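The two safety approaches discussed here, statically restricting programs versus guaranteeing bounded execution, can be illustrated with a toy instruction-budget check. This is a sketch by analogy only, not eBPF itself and not the speaker's analysis tooling.

```python
# Toy one-register VM that enforces an instruction budget, so even a
# looping program cannot run forever (analogous in spirit to the
# verifier's instruction limit, though the real verifier checks
# programs statically before they run).

class BudgetExceeded(Exception):
    """Raised when a program exceeds its instruction budget."""

def run_bounded(program, budget=4000):
    """Execute ('add', k) / ('jnz', target) instructions under a budget."""
    reg, pc, executed = 0, 0, 0
    while pc < len(program):
        if executed >= budget:
            raise BudgetExceeded(f"stopped after {executed} instructions")
        op, arg = program[pc]
        if op == "add":
            reg += arg
            pc += 1
        elif op == "jnz":            # jump if register is non-zero: a loop
            pc = arg if reg != 0 else pc + 1
        executed += 1
    return reg, executed

terminating = [("add", 3), ("add", -1), ("jnz", 1)]   # counts 3 down to 0
looping_forever = [("add", 1), ("jnz", 0)]            # never terminates
```

The terminating program finishes well inside the budget; the looping one is cut off, which is the runtime counterpart of statically proving that every loop has a bound.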
L
Hello, I'm Matthias Kovatsch, with Huawei, and this is at best a teaser of what we are doing in the Thing-to-Thing Research Group about the edge and IoT. The main goal is to discuss with this group how these two research groups basically should or could work together. So, a quick refresher on what we have done: the topic of edge computing and how it's relevant for IoT was first discussed at IETF 98 in Chicago, and the compiled results from that were actually presented here in Prague.
L
Also, the Thing-to-Thing Research Group people have joined the COIN meeting to see what is going on in this activity and how it relates, and we discussed in a pre-meeting, basically last Friday, where we should go with the topic of edge and IoT in the Thing-to-Thing Research Group. We had some questions, and one of the classic ones, which you probably also had, is whether the Thing-to-Thing Research Group is the right research group to do this work.
L
The gap analysis and so on showed already that it's probably a good place for the distributed and lightweight aspects of the challenges that are discussed in edge computing, and we also confirmed, basically this Friday, that it's a good place to look at this from a device-centric view. So, for an IoT device, what would be the next hop?
L
It's probably something that is close to it, so meaning in the edge. We also discussed whether we actually have enough interested people, i.e., enough workforce, to work on this topic, and it showed that there was actually a quite good number of people, so we have the critical mass. Maybe interesting was that there was quite some interest in particular for industrial applications. And then one of the main questions: how does this relate to COIN RG?
L
Since you touch similar topics, and it became very apparent with the previous presentation that this is something that, at an overview level, we basically have in common, we thought about whether we have to move some of the topics here. However, we found that there are enough people with interest to work on this in T2TRG, but maybe providing exactly this thing-centric view might be a good contribution, to also help define the challenges and think about the solutions here in this research group, or proposed research group.
L
Another question I would have is: where are the people from the "beyond edge computing" mailing list? I saw that there was not that much traffic there; are those people basically now here? Would there be people working a bit more on the tangible problems, let's say, of IoT devices, so that we could motivate them to join us in the Thing-to-Thing Research Group? Yeah, that's something we could discuss later.
L
To give you an example of what we are discussing and how we are approaching the problem: one example is that it's kind of clear that many of the IoT devices require some supporting services, because they are constrained in nature. This could be compute; it could be something like a simple unit conversion of some of the numbers that you receive, or a format conversion, but it could also be a heavyweight task like semantic reasoning, to adapt to a certain environment. Storage is quite interesting.
L
A quite interesting question is what the natural compute nodes actually are for IoT applications. For configuration purposes, the smartphone, for instance, became the go-to device and is now used in many of the applications. But what is, basically, the compute and storage equivalent that you would find in these environments? It of course heavily depends on the application domain, so it's different in the home than on a shop floor, for instance.
L
For this reason we want to split up the work into the different domains and basically figure out what the differences are. The approach, how we do it, how we figure that out, is probably similar across the different domains, but we have to collect the use cases, which are definitely different, yeah, as I mentioned already.
L
So virtualization itself is not a topic for the IRTF or the IETF itself, but we have some problems that could be interesting: for instance, bindings, or the orchestration logic, which has to live somewhere, so you have to deploy it somewhere, and these aspects of the application are independent of the lifecycle of the devices, or the things, themselves. So we need some kind of deployment mechanism for this, and problems in scope are the interfaces: how can I deploy a certain orchestration logic?
L
Yeah, the next steps are basically, if there is interest, to set up some kind of, yeah, more formal liaison, so that we have a common picture of how to work on this. One question is whether this should be coordinated or distributed; a question beforehand is whether we should just exchange some results that we came up with. But interesting would be if you maybe already have some concrete work items, some ideas, that might be better approached from a thing-centric view, for instance, and yeah, how to continue. Thank you.
C
Thank you very much. For the sake of time, I'd like to push the questions to the list because, obviously, these questions, I'm sorry Dirk, these questions are basically questions for both the Thing-to-Thing mailing list and our main list. So I would push them there and see what happens, and see how we can coordinate; I think there is a way. Thank you very much for this. Noa is online; I will load your slides. Thank you.
Q
If we are looking at the amount of data that has been going through the network in the last few years, and at the forecasts, you all know that we are already talking about zettabytes per year, and about tripling the amount of data that is supposed to go through the network within five years. Now, we should start thinking about what this means for our communication infrastructure being able to continue to scale with this increasing amount of data, as well as what it means in terms of processing.
Q
Now, we already know that computer architectures have hit a wall, in terms of the end of Moore's Law, but still, over the last ten years, the performance of servers has improved 27-fold. But when you look at the increase in performance of networking devices over the same amount of time, it has increased two-hundredfold. So we have an increasing gap in performance between network devices and CPUs, and it's already an order of magnitude. If we can please move to the next slide.
Q
And the reason for this gap is the way that the architecture of these devices is implemented. CPUs basically move instructions through the pipeline; the pipeline is 64 bits wide, and all the data resides in the memory. In contrast, if you are looking at switches: in switches we have the data moving through the pipeline, while the instructions basically reside in the memory.
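The contrast drawn here, instructions streaming past resident data in a CPU versus data (packets) streaming past resident instructions (table entries) in a switch, can be sketched as a toy match-action pipeline. This is an illustrative model only, not code from the talk; all names, addresses, and table entries are hypothetical.

```python
# Toy model (not from the talk) of a switch-style match-action pipeline:
# the "program" is the set of table entries resident in memory, while the
# data (the packets) streams through a fixed sequence of pipeline stages.

def make_stage(table, default):
    """One pipeline stage: look the packet's destination up in a resident table."""
    def stage(packet):
        action = table.get(packet["dst"], default)
        return action(packet)
    return stage

def set_port(port):
    """Action: forward the packet out of the given port."""
    def action(packet):
        packet["out_port"] = port
        return packet
    return action

def count_bytes(counters):
    """Action: a small piece of in-network computing, counting bytes per destination."""
    def action(packet):
        counters[packet["dst"]] = counters.get(packet["dst"], 0) + packet["len"]
        return packet
    return action

counters = {}
pipeline = [
    # Stage 1: a forwarding table (the resident "instructions").
    make_stage({"10.0.0.1": set_port(1), "10.0.0.2": set_port(2)},
               default=set_port(0)),
    # Stage 2: telemetry for one destination, a no-op for everything else.
    make_stage({"10.0.0.2": count_bytes(counters)}, default=lambda p: p),
]

def process(packet):
    """The packet, i.e. the data, moves through the pipeline stages in order."""
    for stage in pipeline:
        packet = stage(packet)
    return packet
```

Reprogramming the switch in this model means only rewriting the table entries; the pipeline itself stays fixed, which is roughly why a data-plane architecture can scale so much better than a CPU fetching an instruction stream per datum.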
Q
So,
a
few
years
ago
we
started
a
project
that
is
called
can
computing
as
a
network
that
is
specifically
tailored
to
handle
this
growing
gap
in
performance
by
tithing.
What
we
effect
was
a
tiny
tablet:
data
center,
which
pulls
all
the
data
center
components
into
the
computer,
but
having
a
computer.
Well,
the
network
device
is
at
the
core
of
the
computer
and
all
other
components
are
what
you
think
about
these
peripherals.
So
in
the
switches
we
have
limited
memory
resources
we
don't
have
storage.
J
Q
It is almost already deployed: first of all because you already have network devices within the network; but the other benefit is that you already pay for most of the power by moving the traffic through the switches. So you already have the packets coming anyway; you are just making use of an under-utilized resource, the network. Next, please.
Q
So what I'd like to focus on now is not so much how to do in-network computing as what we can do with in-network computing, and I'd like to thank the previous speaker for that, because her talk is actually perfect for some of the upcoming slides, and these slides resonate with her talk. Can you please move to the next slide. So computing is becoming akin to infrastructure, even though it's not infrastructure today; and all sorts of infrastructure, whether that's electricity or sewage or traffic, is deployed at varying scale and for varying needs.
Q
So
if
you
are
thinking
about
a
junctions,
then
we
have
the
mini
roundabout
and
the
roundabout
in
the
bigger
box
junction,
and
then
we've
got
simple
interchanges
and
highly
complex
interchanges
in
computing.
We
don't
have
all
that
right.
We
only
have
the
equivalent
of
mini
roundabouts
and
roundabouts,
which
are
the
mobile
devices
and
the
servers,
and
then
we
have
the
data
centers
as
the
highly
complex
interchange,
but
we
don't
have
anything
in
between.
Can
you
please
go
back.
Q
So
if
you
were
saying
yes,
I've
got
racks
full
of
servers.
It's
like
saying.
Yes,
I've
got
a
junction
with
40
minute
on
the
box.
No
one
wants
to
go
through
40
min
around
about
suppose
a
junction
which
means
that
we
need
to
have
a
better
tailored
computing
infrastructure.
Can
please
move
forward
now,
so
this
is
the
equivalent
of
what
we
have
today.
Q
We
either
have
the
user
going
directly
to
the
data
center
or,
if
you
move
to
the
next
slide,
then
we've
got
edge
computing
in
the
middle,
but
we
don't
really
have
a
higher
key.
So
if
we
go
to
the
next
slide,
we
already
have
in
computing
infrastructure
by
design
high
either.
If
we
are
talking
about
different,
is
within
the
network
reorder.
If
we
are
talking
about
valley,
free
routing,
we
have
by
design
scalability
here.
So
if
you
could
move
to
the
next
slide,
please
so
what
can
we
build?
Q
A computing infrastructure that is scalable, and by scalable I refer to adding more levels of computing within the network: whether it's the edge compute we have today, whether it's higher-performance edge compute, or maybe it's more local compute, like computing at the curb, basically computing that is placed next to the user, at the first location where you've got some communication infrastructure.
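One way to read this hierarchy, curb, then edge, then data center, is as a placement policy: run each workload at the lowest tier that still has room for it. A minimal sketch, with entirely hypothetical tier names, capacities, and latencies (none of these numbers come from the talk):

```python
# Hypothetical compute hierarchy, ordered from closest-to-user outward.
# Capacities (arbitrary compute units) and latencies are made-up numbers.
TIERS = [
    {"name": "curb", "capacity": 4, "latency_ms": 1},
    {"name": "edge", "capacity": 64, "latency_ms": 5},
    {"name": "datacenter", "capacity": 10_000, "latency_ms": 50},
]

def place(demand, used=None):
    """Place a workload of `demand` units at the lowest tier with spare
    capacity, falling back toward the data center; None means nothing fits."""
    used = used or {}
    for tier in TIERS:
        if used.get(tier["name"], 0) + demand <= tier["capacity"]:
            return tier["name"]
    return None
```

In this model a small IoT aggregation job lands at the curb; once the curb is full, the same request falls back to the edge, and only large jobs or overload reach the data center, which is the complexity-reduction argument made on the next slide.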
Q
So,
if
you
can,
please
move
to
the
next
slide,
so
some
of
the
benefits
here
will
be,
first
of
all,
terminating
the
data
at
the
edge
before
it
gets
to
the
data
center,
which
I
believe
most
of
you
already
agree
with.
It
means
that
we
can
reduce
the
complexity
every
stage.
You
don't
need
the
same
complexity
at
the
edges.
Do
you
need
at
the
data
center
and
you
don't
need
at
the
curb
the
same
amount
of
complexity
that
you
need
at
the
edge?
Q
It
really
is
the
scalability
and
by
scalability
I
mean
not
just
scalability
of
the
networking
equipment,
but
the
scalability.
When
I
think
about
handling
the
infrastructure
over
time
and
being
able
to
sustain
increasing
data
demands,
it
will
also
reduce
power
consumption.
It
is
assumed
that
by
2025
about
20
percent
of
the
world
electricity
service,
yeah
electricity
requirement
will
go
for
data.
Q
So, first of all, and just briefly on this slide: cloud providers also have an incentive to move computing closer to the user, from their perspective, whether it's for offloading an application or whether it's for reducing the load on the data center and increasing scalability. Next slide, please. But it also, and that's the more important part, allows us to change the application deployment model. So today, as a user, I'm using different applications. Let's say that I have a Fitbit: the data from the Fitbit goes to the cloud, but I don't control which cloud.
Q
In
contrast,
what
it
proposes,
the
model
where
the
users
picks
it
up:
repute,
service
provider
and
the
compute
service
provider
is
the
one
that
offers
different
applications
and
it
provides.
Privacy
is
a
service.
It
provides
control
over
data
as
a
service.
If
you
can,
please
move
to
the
next
slide
and
I'll
be
going
over
this
slide,
partly
because
they
are
supposed
to
be
animation.
Q
The
idea
is
to
allow
us
to
choose
the
provider
the
same
way
that
with
choose
television
service
provider.
So
if
you
can
move
to
the
next
slide,
please
so
if
I
go
to
a
certain
provider,
that
will
offer
me
a
set
of
application,
just
like
I
have
a
set
of
TV
channels
today
and
if
I
want
more
channels,
it's
the
same
as
picking
also
to
use
Amazon
or
Netflix.
So
there
are
some
more
applications
that
may
be
in
the
cloud
or
in
the
next
level
of
application.
Q
If
you
can
move
to
the
next
slide,
so
the
idea
is
to
turn
compute
communication
into
also
computing
service
provider
will
allow
us
to
choose
our
set
of
applications
that
are
running
within
the
network,
which
are
running
closer
to
the
user
and
providers,
privacy
and
data
control
as
a
service.
This
will
also
in
competitive
competition
in
the
market
and
improve
the
resilience,
because
the
failure
of
one
club
weather
won't
kill.
Q
C
You
so
very
much
no
way
actually,
for
the
sake
of
time,
maybe
we
can
combine
the
questions
of
this
paper
to
an
open
discussion
about
you
know
what
we
wanted
to
do
at
the
end,
because
I
think
this
raises
really
really
good
ideas
about
what
network
computing
or
computing
and
network
will
become
so
I'd
like
to
open
the
floor
and
again,
thank
you
so
very
much
know
and
stay
on,
because
some
of
the
questions
you
may
be
able
better
than
us
to
answer.
Thank
you.
R
Hello, how are you? So, from Huawei, I have a general question; I may have misunderstood. What's the real meaning of this compute in the network? Actually, I originally saw a similar concept presented at a trade show; but to me all of these use cases or applications look similar: I still use the network in the conventional way, to provide the connectivity, or just working as a fabric.
C
I'll return it back to you. So, don't sit down; I will let Eve and Jeffrey barge in too. I will return the question to you: what would you like it to be?
R
C
Basically it is what you are describing, and I think if you look at what Noa just presented, it is also related to what you presented; and the industrial network, the industrial network is also taking the applications and executing them closer to the user. So could you maybe clarify what your question is? Yeah.
C
It's not just the edge, I think. I think Noa expanded on this: Noa mentioned the issues of data centers and so on, and we mentioned the continuum, and the continuum for us is very important. For example, Jeffrey is very interested in what's happening in the data center, and if you look at his draft there's a lot of mention of consensus and key-value stores.
C
Obviously there is a lot of stuff also happening at the edge, and this is what catches your imagination in some way; but I think the important word in the presentation, and maybe we should put the matrix picture back up, was this idea of a continuum. We do not want to limit it to the edge; we don't want to limit it to the data center; we don't want to limit it to something in between. We want actually to connect everything. Okay.
N
Right, Dirk Kutscher. So, yeah, scoping is clearly the challenge here. Computing in the network, I mean, obviously there is a lot of computing in the network already, so you know, things like CDNs, any of the edge caches; you could put application servers here and there. So we should be quite clear that this is probably not what this research group would be about, right? I mean, it's not just about arbitrary computing in the network; that already exists.
N
So, secondly, I mean, the switch-based work, like what Noa presented, it's super interesting and you can do many things. I'm a bit wary that we are reinventing things like ISDN for in-network computing: so basically, you know, a system where we say this always reduces complexity a lot, and so we have a system where we can do value-added services in the network. So let's be really clear; I mean, it's okay, they are different research directions, of course, but I think what could be attractive is to think.
F
That's the key question, but you're right, I think the scoping is absolutely important, and maybe we begin with what's out of scope, and also maybe even cluster the topic areas and decide where we've got enough interest for people to be contributing ideas. Yeah, I think this is kind of an organizational question of how and where to get started.
K
O
From the demo room, there was a question from Pedro from NICT. He says that today we can choose the apps regardless of the provider, but with the shift in the model we may be unable to choose an app that's not offered by our current provider, and asks if that's a step backwards; which I guess is a question about the openness of the infrastructure and how to enable a multi-use infrastructure.
C
The question was about the fact that right now we can actually choose applications that are not supported by our provider, and if we go to a model where, you know, we have a set of applications that are, in a way, walled-gardened by the computing provider, then maybe that's a step backward. Do you have any comments on this?
Q
Yeah. So the idea is, first of all, to focus on a subset of applications. For example, Facebook will never run at the edge; Facebook will always need to run in the data center. But there is no reason that the data from different IoT devices, for me, for example, from my temperature-control system or from my Fitbit, will need to go to the cloud. And while only a certain number of applications may be supported, you can always go back to the model that you have today and use whatever cloud the application picks for you.
Q
You don't necessarily need to use the compute service provider; but as a person who really cares about his privacy, I really want to be able to choose which data center is guarding the data: what are the guarantees, privacy, security, who is the data shared with? And even if we start with a limited number of applications, we can increase them over time, because it does require some changes to the applications as well.
Q
I
already
seen
that
benefits
for
the
user
that
go
beyond
performance
and
have
to
do
with
the
way
that
we
handle
our
data
to
date,
and
you
may
think
also
models
such
as
a
data
box
or
data
box
is
what
I
think
about.
Is
the
software
complimentary
to
this
to
the
can,
so
it
is
not
supposed
to
apply
to
all
applications
that
ever
exist,
but
it
is
certainly
the
idea
that
it
will
apply
to
applications
that
better
to
the
users.
Thank
you
next
question.
S
Erik Nordmark; some thoughts in this space. It's clearly a very large space, right: we talked about anything from sort of P4 up to how we do computing in the Internet overall, whether your Facebook runs here versus somewhere else, or industrial applications. One thing that you might want to consider is saying: okay, what is actually new here, what is changing? And the sort of P4 angle is that, well, we might be getting some different ways to do some fairly limited compute (I'm not even sure I would use that term), but sort of saying: what does it mean to integrate this into a system where you use it? And start from there, saying: how do I model this stuff, how can I think about the capabilities of the network? And while I do that, I sort of do the normal thing where a controller is setting up the paths, the TSN support, whatever, time-sensitive networking, so I can actually think about what an infrastructure looks like.
F
As you said, you know, what's the kind of ecosystem that lives around it. One of the talks that we didn't get to present, which is one of the drafts, is about edge data discovery, or just the edge data problem: who's working on that, how do you find or marshal that data, where does it go, how do you name it? There was a talk about that.
J
So, Nicholas Vidal; building on what Erik said, one very interesting thing here, as we try to understand this, would be the computational economics of this. It tells us we're bridging between the networking and the compute worlds, and there's a lot of, well, it's the same problem as with edge: you basically don't know, is it valuable enough to pay for itself? Yeah.
C
We're also going to talk to the others, as we said, and link with the other IETF and IRTF groups that do related things. We're thinking of a potential virtual meeting, but we'll send that to the list again, and we plan to meet in Montreal, which is going to be great (it's my hometown), and the end of July is fantastic. So thank you so very much for coming; this, I think, was a great first meeting, and we're very happy that we were accepted as a proposed RG.