From YouTube: IETF108-COINRG-20200731-1410
Description: COINRG meeting session at IETF 108, 2020/07/31 14:10
https://datatracker.ietf.org/meeting/108/proceedings/
B: I will download them. Do you mean at the very end? I can. I will get them in the interim, okay. Why don't we begin.
C: Okay. Okay, so you want me to begin. Okay, yeah, go ahead. Okay, hello everyone, and welcome to this, actually, the last session of the week, which has been kind of interesting with everybody virtual.
C: This is Computing in the Network, COIN. Our three chairs are Jeffrey, Eve, who's on the call, and me. Jeffrey is having major Datatracker issues, so he may join us a little bit later. Next slide.
C: So we have the usual. By the way, we are recording this, so if you don't want to be recorded, don't talk. We have the meeting material, we have an Etherpad, and obviously there's the Jabber. The Jabber is also reflected on the chat for this session.
C: We do not have a note taker right now; however, in another meeting what we did was use the recording to do the minutes. But if somebody wants to take some notes and send them to us afterwards, it would be really great. So again, we're recording; I think with Meetecho, having the video off and wearing headphones is kind of easy. And again, keep yourself muted if you're not speaking, and we're monitoring everything to see if there are questions or comments. Next slide.
C: So this is the intellectual property note. I think this group hasn't had a lot of problems with that up to now, but please read this and make sure that when you present something you are very much in agreement with the IPR disclosures. I would say, for me, the most important one is that it's a timely matter.
C: Again, there's the privacy policy and code of conduct. We're also very much into that, making sure that everybody is very respectful of one another and of the whole meeting. Next slide.
C: So this is again very generic. The goal of the IRTF, I think the one that's highlighted, is the most important one: we do research, we do not do standards. So I think it's important to know that the work presented here, even the drafts, will end up as Informational and not on the Standards Track. Next slide.
C: We have a very generic goal, I would say, and I think it's evolving, because this field is also evolving, but it's still very much to foster research in computing in the network in order to improve performance, improve applications, and improve user experience. We're not very limited in terms of scope: we want to look at architectures.
C: We want to look at protocols, protocols meaning everything from transport layer issues to security, and we want to look at real-world use cases, applications, and work in progress. And I think, more and more in terms of architecture, we've been associated with some new ways of looking at the internet, and officially, you know, we're breaking the end-to-end paradigm. So there are ways of looking at what we're doing, and I've seen papers, for example in ANRW this week, that have also addressed some of these issues.
C: The focus, it's not really a focus, it's not too focalized, but the focus is the core-to-edge compute continuum. And I think, while computing in the network really started with data center applications and services, we can see that more and more is moving to the edge, and it's also moving toward what I call the cloud-enabled edge, or the cloud-supported edge, where you essentially have applications that communicate at both ends. Next slide.
C: So we have a pretty dense agenda and I've already taken too much time, but it's okay. Yes, you're...
C: We'll look very fast at the drafts and the milestone list, and I think, probably for the sake of time, we'll send the plans for updating the milestones, which we would like to do, to the list, and we plan to have probably an interim where we could also do that. We're going to have a number of presentations. The first two are very much about requirements and directions, from China Mobile and then from Dirk and Jörg.
C: There's going to be quite a bunch on data discovery, I think a very generic problem, then the edge impact of that, and then the mobile one next.
C: Then the industrial use case; this is actually an update. And I would say, for the previous two or three presentations, Dirk's, Eve's, and this one, I think we will probably start looking at adopting them as RG drafts.
C: Then I don't have a draft, but we've done work with Edgar Ramos at Ericsson, Roberto, and Eve on some kind of common data layer, and I want to present that; we're probably going to have a draft later. And if we do have time, we want to have some discussion again about approving some working group items in an interim before the next IETF. I don't think we'll have time for open discussion, but let's be positive here. Next slide.
B: Yeah, I mean, this was just to share with people the drafts that are affiliated with the group at this point, and many of them will be presented today. And in fact, here's a different incarnation of that, showing the drafts that will be presented today. The ones marked in red are new drafts; there'll be several of those.
C: Okay, so I actually did a milestone review, and while we didn't have, like, real KPIs for how we're going to evaluate where we are, I will not go through the whole list. I think we've achieved or partly achieved a few.
C: I think we need to basically address some of the implications of COIN. There have been discussions, a lot of discussions I would say, in the group and outside the group on the impacts, but we haven't really put them in concrete form. So maybe this is something that we want to start addressing, because I think it's still an important milestone.
C: I think what we have been really good at is looking at what's happening in the landscape: what are some of the current applications, what are some of the challenges? We needed that to get IAB approval anyway, so we've been pretty good with that. I think the last one, which is actually for the next IETF, is something that maybe we could start looking at: working toward a better scope, actually scoping it better.
C: Like I said, we started by being all-encompassing because we wanted to do the whole edge-cloud continuum, but there are probably things that could stand out, and I think we would like to do that now. Also, we will probably send that to the list. There are currently 70 people in the queue, and I know there are more than that on the list, and maybe we can get suggestions for new milestones and for milestones that should be changed.
C: I don't know if you follow, but there's a lot of discussion about data filtering, for example; even the IAB is really keen on that, data filtering at the edge. And that goes with some of the work we've done that includes metadata and, essentially, having better function and data discovery.
C: So I think we're going to send that to the list and start working on it for the next IETF. And where we are trailing, I would say, is the catalog of COIN requirements and applications. I think it's hidden somewhere in a lot of the drafts and in a lot of the references that we have, but I've put that on my list of things; you know, we don't go anywhere these days, so we have a lot of time.
C
A
B
B
C: Are the people from China Mobile online?
D: Okay, this is Peng from China Mobile. Yeah, is that okay? I will introduce the draft on requirements for computing in the network and differentiated resource reservation. So, first the requirements.
D: In the last version, we categorized the requirements into network, computing, and management. The major change from the last meeting is adding the requirements for computing and management, including computing resource reservation, computing resource OAM, and service consistency, and we also consider that some existing protocols might meet part of the requirements.
D: So in order to satisfy these demands, the network may not only need to reserve bandwidth resources but also reserve computing resources, and there might be a serial distributed computing model in computing in the network, where different resources need to be reserved on different nodes. For example, an AI algorithm can have a mode of step-by-step iteration at multiple nodes: the previous iteration affects the next result, and computing resources are required for each iteration. And we think maybe SFC also has the same process.
D: So next is OAM, which might be considered early, but what can be mentioned is that OAM of computing resources is more complex than for the network, because network monitoring is relatively simple, like bandwidth and latency data, while computing can be divided into many categories, different applications and different kinds of computing, so it needs fine-grained OAM of the resources. And service consistency refers to the multi-user access use case.
D: Otherwise, it will seriously affect the experience of the users, and so the service consistency can be achieved through network management or application-layer control, yeah.
D: So those are the added requirements in the new version of the draft, and next we will introduce the differentiated resource reservation draft. It considers the resource reservation problem of computing, especially in the serial distributed mode.
D: The calculation process of a serial distributed algorithm is sequential, and the result of the previous calculation needs to be used in the later calculations. So it brings the following two problems: different computing nodes on the same path need different reserved computing resources, and the bandwidth resources to be reserved may differ after the output of the previous calculation on the same path. A typical example is shown here: the picture shows a task that may be divided into three subtasks across the network devices.
D: Nodes one and three and the server may each compute a part, maybe 20 percent, 30 percent, and the remaining half of the computing task.
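The split described in this turn can be sketched in a few lines. This is a minimal illustration, assuming a simple model in which each stage consumes its input and forwards less data downstream; the numbers and the traffic-reduction factor are illustrative, not from the draft.

```python
# Hypothetical sketch of differentiated resource reservation for a
# serially split task: each node on the path computes a share of the
# work, and the data volume forwarded downstream shrinks after each
# processing stage. All values here are illustrative assumptions.

def plan_reservations(total_compute, input_mbps, shares, reduction=0.5):
    """Return per-node reservations along the path, in path order.

    shares: fraction of total_compute done at each node.
    reduction: fraction of traffic remaining after each stage.
    """
    plan = []
    bandwidth = input_mbps
    for share in shares:
        plan.append({"compute": total_compute * share, "bandwidth": bandwidth})
        bandwidth *= reduction  # downstream link carries less data
    return plan

# Example: 20% / 30% / 50% split across node 1, node 3, and the server.
plan = plan_reservations(total_compute=1000, input_mbps=100,
                         shares=[0.2, 0.3, 0.5])
```

The point of the sketch is the draft's first problem: the reservations differ per node on the same path, so a single end-to-end bandwidth reservation is not enough.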
D: So some existing protocols, such as RSVP and PCEP, can be used to reserve bandwidth resources. RSVP is a traditional protocol which only focuses on how to initiate the reservation of resources, not on the establishment of the path, and it's the same with RSVP-TE. And PCEP was designed to separate the path calculation and the path establishment functions of RSVP, which means that the path calculation part before resource reservation can be realized.
D: Okay, so we have two reference methods. One is distributed resource reservation, and the process is as follows: it can be realized by defining a new object for RSVP; in this example, we can add the computing information alongside the bandwidth information. And the second is a centralized reference method using the PCEP protocol, or it can be realized with the NETCONF protocol.
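To make the "new reservation object" idea concrete, here is a rough sketch of what carrying computing information next to bandwidth in a reservation object could look like on the wire. The type code, field layout, and names are invented for illustration; this is not a real RSVP object definition.

```python
# Hedged sketch of a hypothetical "compute reservation" object that
# carries computing info alongside bandwidth: a made-up TLV encoding
# (type code and layout are invented, not from any RFC or the draft).
import struct

def pack_compute_object(bandwidth_mbps, compute_units):
    """Encode: 2-byte type, 2-byte total length, two 4-byte fields."""
    OBJ_TYPE = 0xFFE1  # invented, private-use-style type code
    body = struct.pack("!II", bandwidth_mbps, compute_units)
    return struct.pack("!HH", OBJ_TYPE, 4 + len(body)) + body

def unpack_compute_object(data):
    obj_type, length = struct.unpack("!HH", data[:4])
    bandwidth, compute = struct.unpack("!II", data[4:12])
    return {"type": obj_type, "bandwidth": bandwidth, "compute": compute}

obj = pack_compute_object(100, 250)
```

A centralized variant would carry the same fields in a PCEP message or a NETCONF/YANG configuration instead of a per-hop signaling object.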
D: Maybe we can define a new YANG module to realize it, yeah. So for next steps, we want more analysis of technologies and more research on the realization of the requirements that can contribute to the work, and we want to give more research directions to our work and have discussion about the trends and technologies to come. So if you are interested, please join. Yeah, that's all, thank you.
F: Comments? Hey, hello, this is Dirk. Yeah, thanks for the presentation. So I just had a quick look at the draft again, and it seems that you are addressing a very specific computing-in-the-network scenario. From the presentation I can kind of guess what it is, but I think it could be worthwhile spelling that out more explicitly in the draft, because other people may have different assumptions and different scenarios, so it's a bit difficult to talk about requirements in general.
B: And Marie-José, I'm not seeing you either.
B: And Dirk, are you wanting to send? I don't see that.
C: Okay, so yeah, and I agree with Dave's comment. Yeah, over to you.
F: Okay, great, yeah. So let me know if anything goes wrong; I can only see my PDF viewer right now. Oh.
F: Okay. So this is the draft that we started, I think, last year, the draft that we call Directions for COIN, a bit trying to lay out the research field and describing a few potentially interesting topics and challenges. We just updated this draft last night, so sincere apologies, we just didn't get to it any earlier. So yeah, I don't expect you to have read it by now.
F: So, just a very quick reminder: the intention of this draft is also to discuss a little bit what is actually the interesting topic that we are concerned with here in COIN. What does "in-network" really mean for in-network computing? As we've already seen, there are different perspectives, different options. And so we also discuss a bit: if you, say, buy into the idea that computing should play a more prominent role...
F: ...what are we actually talking about? What is the unit of computation that is interesting, for example, and what are the execution environments? How could you logically organize this? And we also try to give some suggestions on what we should look at here in COINRG. I presented this at some length at earlier meetings; I'm not going to do that again. But one perspective on what in-network means is: well, first of all, we have to acknowledge that...
F: ...there is already computing in the network today, and it's getting more so, with the way that CDNs are extending to the edge, for example. The question is: how much research do these overlay systems need? I'm not denying that there could be interesting aspects, but on the other hand, there are also many people looking at these things already, and so our perspective has been, a little bit, to try to...
F: ...you know, find other topics for computing, or other perspectives, that are a bit less on the engineering side. And so one perspective that we are particularly interested in, and it doesn't have to be the only one, is to basically try to figure out how we could have a joint perspective on networking and computing. So, for example, you are aware that there are all these really interesting and also very successful application-layer stream processing frameworks that are running in overlays today.
F: So this draft, just as a quick reminder, is discussing these different types of in-network computing systems. It tries to introduce some terminology that we would be using in this draft, and it tries to characterize computing in the network versus packet processing versus, say, more familiar network computing concepts.
F: We have two examples: one is the CFN-ICN work that I presented earlier, and we added a new section on the Akka toolkit, an actor-based distributed computing system that is used in quite many frameworks. And then, at the end, we present some challenges, more in terms of research fields. The update from last night is the new section on Akka.
F: So if you don't know this, I think it's a good, say, poster child for a widely successful application-layer toolkit. In this case, it's providing actor-based distributed computing with reactive programming, and so it's quite interesting and it's also quite widely used.
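For readers unfamiliar with the actor model being referenced here: Akka itself is a Scala/Java toolkit, so the following is only a toy Python sketch of the core idea, isolated actors whose state changes only by processing asynchronous messages from a mailbox, and not Akka's actual API.

```python
# Minimal illustration of the actor model: actors communicate only
# via asynchronous messages delivered to a per-actor mailbox. This
# is a single-threaded teaching sketch, not Akka.
from collections import deque

class Actor:
    def __init__(self):
        self.mailbox = deque()

    def send(self, message):
        self.mailbox.append(message)  # asynchronous: just enqueue

    def receive(self, message):
        raise NotImplementedError

    def process_one(self):
        if self.mailbox:
            self.receive(self.mailbox.popleft())

class Counter(Actor):
    """All state changes happen inside receive(), one message at a time."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def receive(self, message):
        if message == "increment":
            self.count += 1

counter = Counter()
counter.send("increment")
counter.send("increment")
while counter.mailbox:
    counter.process_one()
```

In a real actor runtime the mailboxes are drained concurrently by a scheduler, and actors can be distributed across machines, which is what makes the model relevant to in-network computing.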
F: And then we added a new challenge on coordination, which is obviously a major topic in many distributed computing systems, for all kinds of services and functions. Here we think that the fundamental mechanisms are actually well understood and that there has been lots of progress in that field.
F: However, when we discuss systems, we think it's still interesting to reflect on how coordination is actually achieved and what mechanisms are used. It also relates to the system design quite a lot. And then we did some...
F: Yeah, I'm almost done. Then we had some minor fixes, mostly optimizations. So, a quick interjection here, a bit more on the, say, general COIN direction discussion.
F: It's actually not really pronounced very directly in the draft yet, but when we talk about computing in the network, we think it's really more than just, you know, forwarding packets to nodes that happen to host VMs or processes.
F: I think this can be done already, and there are all kinds of solutions for that. And so the interesting potential, at least in our view, is to really embrace this idea of supporting distributed computing by leveraging networking concepts and mechanisms: so not just building better pipes, but trying to maybe also take a step back and think about redesigning systems. And in that context, for example, we don't really think that, say, enhancing TCP to support in-network computing is a very promising direction. So, first...
F: It seems kind of incorrect from an end-to-end semantics perspective, and you're also inheriting many of the constraints that the TCP end-to-end flow model incurs. Security-wise it's a problem, and these things will get more difficult when QUIC is deployed. So we're not sure that that's a good direction for a research group like COINRG. And yeah, finally, a suggestion for COIN...
F: ...about what we're actually talking about. We're not saying that, for example, our perspective, the one that I talked about earlier, is the only interesting one, but I think it could be a problem if we kind of treat this all as the same thing, whereas actually the perspectives and approaches are quite different.
F: So that could be, I think, a good contribution from this group in the first place, and then also to really understand where we actually need new research. So that's our kind of little suggestion for the group and for this draft. So we have a... I think...
F: Okay, so just keep this for your reference; there's quite a lot to do, but I think we have ideas about what to do here. So, a question to the group and the chairs: what do you think about this activity? Is it useful? Is it a good basis for framing the scope and directions? What should we do with this?
C: Thanks, that was terrific, yeah, it's terrific. Before answering your question about the future of the track, we have two people in the queue, so, Uma, you want to talk?
G: So are you looking for scenarios where, like, an active-networking kind of thing with compute is required, whether with the headers? What are the other cases?
F: Yeah, so I mean, the draft itself describes different approaches, or different ways one could perceive in-network computing, and the one alternative that we are most interested in is trying to look into distributed computing and see how that could be improved by moving it from a pure overlay approach to a system better integrated with the network, or at least one leveraging concepts and mechanisms from networking today.
F: So again, this doesn't have to be the only perspective. We also have work here in this group on, say, other ideas, say more on data plane programmability. And what we are advocating is more: let's really be specific about the different approaches and not mix them, thereby ignoring their different properties, but have this taxonomy, and then, having that, we can probably find out and discuss whether there are commonalities that make it sensible to develop common concepts, or not.
F: But I think we have made a good start to some extent, and I think we could do a bit more in this direction. Okay, thank you. Yes, thank you.
B: Interesting time, I think, an interesting time, though. We should probably move the rest of the questions to the...
A: Mailing list, okay, but I think...
C: I know the hum doesn't really work, so we won't do that, but we'll send it to the list. This was one of the candidates for RG adoption, and I think we will ask the question on the list very soon after this meeting. Thank you, Dirk.
I: See everything? Okay, yes, great. So this is a new draft. We have a couple of other drafts that will be shared after this one that go into specific detail on different deployment-type scenarios, but the five of us have been working on data-discovery-type research for the last year or so, and we decided to back up just a little bit and create a new draft.
I: There are many existing ways to find data that are proprietary. AWS has their own ways to find data; they have some good solutions, like Macie and others, but we don't know of any standardized way of doing so. And this data may be cached and copied and stored in many locations throughout a network en route to its final destination, or it may be at its final destination.
I: The location of each data store is probably the first-level discovery problem, and then the details of the database's directory are likely the second-level discovery problem. And since, as I mentioned, there will be several data discovery drafts, including some that will be presented right after this one, we created this more general problem statement, which serves as an anchor for the overall topic without going into specific details. And if we do wind up jumping the fence over to the IETF someday and requesting a BoF, then we'll be ready...
I: ...with this problem statement. This idea evolved out of an edge computing side meeting; we identified many different gaps in that side meeting, and edge data discovery was one of them, and that's why we're even talking about this. So right now we have three related data discovery drafts: this one; one that Eve will be discussing after this one, which is edge-specific; and a new mobile-related data discovery draft. And there could be other ones that may be specific to ICN or any other type of solution.
I: Just to give you an idea of how this all started: there was a variety of use cases that we identified. The elevator use case was one of them, where you've got hundreds of sensors on an elevator sensing all sorts of things, from vibration to temperature or whatever, and that data is being sent to different places in the network. It could be on the edge, it could be in the cloud, it could be dynamic and in memory, and that data needs to be discovered and searched in a certain way.
I: So, what's next? We need to determine if existing protocols will work to solve what we want to have done. If not, then we need to target where a new standard protocol is needed to discover data, and maybe we continue to work on it in COIN, which does seem very reasonable, or maybe we try to create a BoF. And so that's where we're at.
I: We don't know, and we just floated the idea. I didn't even know there were BoFs on COIN; I was thinking about the IETF.
B: Okay, great, let me be quick then, because you've heard about the problem. The way this relates to edge computing, of course, is that the more compute there is in the network, the more the data gets distributed and dispersed, because it's created, processed, stored, and so forth, and so it is sort of scattered. The way this relates to COIN, of course, is that computation often requires both data input and results in data output, and so this is why this is relevant.
B: One of the issues we had was that we felt we still have a somewhat anemic references section, and this is why Michael went off and started looking at related work and discovered that maybe there's a more general problem here. But in terms of this draft, we had just a couple of tweaks.
B: We did some clarification to the service function chaining use case, and then we added a use case that we have affectionately referred to as "ubiquitous witness," which is the idea that in dense IoT deployments there are many, many cameras and sensors, and so when an interesting, unique, or unusual event occurs, or an anomaly occurs, there are many sensors, cameras, etc., that witness it. That's why it's called ubiquitous witness: because there's dense sensing. So we added that use case here. We also have the very beginnings of a security considerations section, and we acknowledge those who've...
B: ...given us feedback. We have also added a new author; Diego has joined us. There were some really detailed edits that Greg Skinner contributed on the list, so I thank him.
B: The interesting bits about the ubiquitous witness use case that were added are that the data is location-driven: these witnesses are all within some interest region of each other and are typically reporting on something within some small time window of each other.
B: The data is all contextually related, and as data comes in, if you want to somehow find it, given that it may be scattered across all of these devices or multiple places, you may need to name your data so that you can subsequently search for it. And furthermore, at these collection or aggregation points, because the data is contextually related, you may be able to process it collectively, and you might want to do that before you pass it further up or to other places.
B: You might want to, say, stitch your video together, or reduce the number of data streams reporting on something to only the best, and so forth. In terms of security, as I said, the function chaining change is all about clarification, but the security section was mostly to acknowledge that what we are expecting is that data will have associated policies that will make discoverability a function of these access controls.
B: We also stated that it should not be an assumption that all edges or edge clouds share their data or make their data public, nor should the assumption be that all clouds somehow keep their data private. So again, very modest beginnings, but a beginning for the security section. And then, after we published, we received some additional comments from Lixia around data provenance and further definitions of edge computing, which we expect to add to the next version. So, I think with that...
J: So a first set of challenges is related to scalability. We need to keep wireless resource usage low for mobile devices, and there are also a few additional considerations, sorry, like: multicast is expensive on wireless networks, and in dense areas mobile devices can generate a constant level of traffic from churn. So, as examples of mechanisms, to provide some context here...
J: We have pre-attachment discovery in 802.11aq, published in 2018, and we also have edge computing discovery in 5G, which is a work in progress. So, to go into it shortly, at least for the first stage of discovery, which is what we care about most in the mobile context: a mobile device provides some information to a discovery service in the control plane of the mobile operator's network.
J: The request includes some information, like application client type, expected location, requested QoS, and others, and the discovery service provisions the next-stage discovery server on the mobile device. That is specified as a data network name, because that next-stage server is in a data network with the other application servers, plus a URI to connect to that next-stage discovery server. So then the mobile device can basically establish the data plane connection and continue discovery.
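The first-stage exchange just described can be sketched as a simple request/response: the device sends its context to the control-plane discovery service, which answers with a data network name (DNN) and a URI for the next-stage discovery server. The field names and the lookup rule below are illustrative assumptions, not 3GPP-specified messages.

```python
# Rough sketch of first-stage mobile edge discovery: control plane
# maps (application type, expected location) to a next-stage server.
# The table and field names are invented for illustration.

NEXT_STAGE = {  # control-plane provisioning table (illustrative)
    ("video-analytics", "cell-12"): {
        "dnn": "edge.cell-12.example",
        "uri": "https://disc.cell-12.example/v1",
    },
}

def discovery_request(app_type, expected_location, requested_qos):
    """Return next-stage discovery info for the device, or None."""
    entry = NEXT_STAGE.get((app_type, expected_location))
    if entry is None:
        return None
    return {"dnn": entry["dnn"], "uri": entry["uri"],
            "qos_requested": requested_qos}

resp = discovery_request("video-analytics", "cell-12", "low-latency")
```

With the DNN the device knows which data network to attach to, and with the URI it can continue second-stage discovery in the data plane.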
J: A mobile device also needs to determine which network interface or data network to use for initial discovery, or when relocating, so a mobile device may need to choose between connections not yet established. As in the case we just saw on the previous slide, for centralized discovery in a managed network, a control plane server can tell it which data network to use. Now, for a more distributed discovery method...
J: We can use passive and active discovery methods. Among the passive discovery methods, we have provisioning domains, router advertisements, and DHCP signaling, and those methods typically need the data plane connections to be up, unless you can provision them prior to that time. So, for example, provisioning domains could be deployed using policy rules, but that is not done today. And active discovery...
J: ...methods, like DNS-SD or mDNS, can be used as a second step, or basically they can also be used concurrently over multiple interfaces. And finally, service continuity needs to be maintained when a provider or consumer moves to a new location, and that can impact the service level. So, for example, you could lose frames when processing a video stream when a mobile device moves. Now, among the very common, high-level strategies: first, we can reconnect to the same service instance.
J: We could reconnect to the same instance, actually migrate it, or use multipath; or you could also discover a new instance and use it concurrently or as a replacement. So one question is: could this discovery process help inform the choice of strategy for session continuity? So, to conclude, and as a starting point for a discussion that we can have afterwards: we could consider mobility-related requirements, among others, for data and service discovery in COIN.
C: A question from Dave on the chat about this. I don't know if it applies to this presentation or the previous one, but I think it's all related: the distinction between "in the network" and "on the network."
C: Does anyone want to comment on this? Also, my question while I'm at it: it's not clear to me what the research elements in your presentation are, and I may have missed it. So, two questions: one on the distinction between in and on the network, and the other about what the research challenges are.
A: Oh, I'm sorry to say I was looking at the Jabber window. Can you repeat your question? There's a question from Dave; it's actually the question about being "in the network" versus "on the network." Oh, and I think it was recent, so I'm assuming Dave was asking about this current presentation.
B: That was what I was just reading, actually. I'm going to read it out loud, since Dave doesn't seem to have audio. "Clarification question/comment: we seem to not have a crisp distinction between in the network and on the network. I'm having a bit of difficulty teasing out which is which in this draft." I presume that you are talking, since this came in at 8:02...
B: ...about the current draft, Xavier's draft, though more generally it doesn't matter. "My intuition is that it does matter, so not excluding stub endpoints in the absence of some distributed computing elements involved in the communication seems a big expansion of the scope of COINRG." Yes, not excluding stub endpoints...
B: Dave, he can speak now, he's got audio? Is that what Lixia is telling me as well?
C: And actually, I think on the next one I can actually ask the larger question. Oh, here's Lawrence, go ahead.
B: Can you speak a little louder?
K: I can; I'm speaking pretty loud, and it worked before. So, I haven't really paid much attention to COINRG since it got chartered, but I'm surprised that the scope seems to have ballooned quite a bit from what the charter talks about. I thought the intent was to figure out how programmable network hardware that can operate on data would be, sort of, best used, potentially, in a future internet.
B: Yes, and I think that, as a result of the in-the-network issues, this is why marshalling data from somewhere, figuring out where to place data after computation, and, if it's persistent, handling the output from computation is really what the original edge data discovery draft was about. So it was kind of this ancillary but attendant problem, related to the fact that compute requires I/O. And then the other two drafts are new.
K
Yeah, that confuses me, right? Maybe I'm coming from a different world, but our customers are basically laying out pretty carefully where the data pipeline is going to be, between the edge, the core, and then the cloud. And so the idea that data would just sort of float around and you would have to find it is kind of weird, because that's not how...
B
If you have a container-driven orchestration model of what compute in the network means, then sure. But even with containers: the persistence of data beyond container sessions, or even optimizations for containers that might use the output from previous computations, and so forth. Again, those things may move...
K
...forward; that's the purpose of storing data, right? You're dumping it persistently because you want to reuse it, and have a visitor... hey, I need to take... yeah, awesome, somebody got a new dinosaur book. So, you know, if you're on a container, you're not just letting the data sort of dissipate into the network and then hoping that you will find it again in the future. That seems pretty sub-optimal.
B
Again, you have a container model where the data origins and everything are pre-configured. But if things are happening on the fly... you know, the example that I gave about ubiquitous witness was that data is coming through a programmable switch.
B
Let's say it recognizes that all of this data is contextually related, and it decides to do a computation on it. So maybe this is something that it discovers through AI; it's not container driven. That was kind of the counter-argument: that it's not pre-configured. But we are running rather late, and there are a couple of other people in the queue. Mauricio, I'm going to leave it to you; let's continue.
A
L
C
Okay, Toerless.
M
You know, one could argue it to be within the scope, and it is close to what wouldn't be within scope, right. And the explanation about contextual data: the way I would phrase it in terms of existing networking is that there are all types of DPI that operate on traffic and can then apply, you know, even changes to the traffic, like in firewalls, in inspection units and so on, right. So that's an existing interface between fairly rich computation and the forwarding plane, right.
M
J
Thanks, and just a few words to answer Toerless's comments on the research aspect here. You know, I think, like, the mobile use case...
J
...is an expensive operation, so maybe one difficult aspect would be to determine what type of data you need, especially for data discovery.
C
C
I think let's take a note, for everybody, and I'll add it when we do the minutes: it would be really good for presentations to go back to the charter and say what part of the charter they address. Because if not, I agree that we're going to touch so many things. Doing plain data discovery and finding which point of access to connect to: you don't need any computation for that.
C
You would need computation if something is needed in terms of computing a position, or computing the best path to the best access point, or something like that. But just to find where the best location is? Yeah, I don't think you need computing in the network for that. I don't want to sound mean; I think the work is valuable, but I think it would be, yeah, a big step backwards. Everybody, how do we connect to the charter? And maybe, going back to what Mike was saying: if there is such an interest in just discovering the data, independently of computing or not, then maybe you need another... maybe it's a BoF, or maybe it's separate, or maybe it's something from our list. I don't know; I don't want to throw anybody out, by the way, yeah.
B
B
Thank you very much, Mike and Xavier, for this. Ike is in the queue, so let's give him the screen.
A
And then there...
N
Yeah, sure. So, I will be talking about the progress that we are making on our three drafts: the industrial use cases, the transport issues draft, and the security and privacy enhancements with in-network computing systems. Actually, since March we haven't really updated the drafts; we have instead more or less focused on working on the topics.
N
What I would like to do today is give a brief overview of what we are currently doing and what we're still planning to do. It is also somewhat related to what Marie-José was saying in the beginning, like the scoping of the research group, and then also related to what Dirk presented.
B
N
Okay, yeah, but that's the right one. So, I was also thinking about how we could perhaps scope the research group a little bit more, or how we could help with that when we're doing the research group drafts that we were discussing earlier. So yeah, the first draft was about industrial use cases. Here, our basic assumption was that we have a lot of sensors and actuators in industrial systems.
N
Then, depending on the concrete setup, it might be the case that there are simply too many sensors transmitting, for example, into a remote cloud, and not all of the data can really be transmitted because the uplink capacity is too small. Or we have actuators which need low-latency control information, and the physical latencies might be too high to ensure certain low-latency applications. Our idea in this setting was to deploy certain functionality into the network, so that we can actually enable these settings in the industrial environment.
N
If you take a look at the image there at the top: if we simply place the functionality at the top switch there, we don't have to go all the way up to the remote cloud, and then some control information that can be derived from sensor input might now be fast enough for the actuators there on the left side. Next slide, please. In this context we're working on quite a lot of different prototypes, and this is one example use case that we're doing.
N
If you take a look at the image on the left side, you see this red thing there; that is a laser tracker, which basically measures, or tracks, the position of this vertical thingy there with the checkered field in front.
N
It measures the position in the form of spherical coordinates. What is then done is that these measured coordinates are transmitted into the remote cloud, where they are mapped into a global coordinate system, and from there transmitted back to the actual vertical thingy in the form of Cartesian coordinates.
N
Our mechanical engineers tell us that this is too slow for their applications, and what we are thus trying to do instead is execute these functionalities directly on the networking hardware. It basically boils down to the computations that you can see on the bottom right.
N
So, a couple of trigonometric functions and some multiplications. But this is in fact already too complex for most networking hardware, which is why we are currently looking at different ways of expressing that on networking hardware. There we are currently in the evaluation phase, and we are planning to write a paper about that as well.
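The computation described here is, in essence, the standard spherical-to-Cartesian mapping. A minimal sketch follows; note this is the textbook conversion, and the exact angle conventions and per-axis formulas on the slide are assumptions here:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Map spherical coordinates (radius r, polar angle theta,
    azimuth phi) to Cartesian (x, y, z): a handful of trigonometric
    functions and multiplications, as described for the slide."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z
```

The trigonometric calls are exactly the hard part for switch hardware, which typically offers neither floating point nor sin/cos, hence the evaluation of ways to express this in the data plane.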
N
Yeah, next slide, please. The next draft that we did was on the transport issues. In general, applications build up a dedicated end-to-end transport connection when they want to exchange data, and they especially assume that the network underneath is unreliable and doesn't really touch the content of the packets going through the network.
N
With COIN, however, we now somewhat break that; I think Marie-José also said that in the beginning. Some applications, as I showed on the previous slide, intentionally fiddle with the content of the packets, and thus somewhat introduce the need for a notion of transport sensitivity in the middle nodes as well. In this context we've raised a couple of issues in our draft.
N
Now we've just started a first prototypical implementation of a transport which is able to support and understand changes that are happening in the network. We really just started, like, a month ago, I think.
N
What we've chosen for the beginning is IPv6 as the addressing scheme, so that we can, on the one hand, set distinct IP addresses for each of the functions that we have in the network, and then make use of segment routing to steer the traffic, so that all of the functions that we want get executed.
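The steering idea can be sketched as an ordered segment list, where each in-network function is reachable under its own IPv6 address; the addresses below are illustrative documentation-prefix values, not taken from the draft:

```python
# Each in-network function gets a distinct IPv6 address, and the
# sender encodes the ordered list of functions the packet must
# traverse, segment-routing style. Addresses are illustrative.
segments = ["2001:db8::f1", "2001:db8::f2", "2001:db8::dst"]

def next_hop(remaining):
    """Forward toward the first remaining segment; once that
    function has executed, the segment is consumed."""
    return remaining[0], remaining[1:]
```

Executing the whole chain then amounts to repeatedly applying `next_hop` until the segment list is empty, with the last segment being the actual destination.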
N
And then, yeah, we've decided not to reinvent the wheel; we've instead opted to use an existing transport mechanism. There we thought about using SCTP, because it is a datagram-based protocol, so we can make sure that the packets we have in the network are self-contained. Additionally, SCTP offers the nice feature that its messages are made of chunks, and there are quite a lot of so-called chunk types that are still reserved for IETF use. What we are currently intending to do is experiment with assigning certain chunk types to certain functionality in the network, so that, based on the chunk type that is used by the sender and then received by the receiver,
N
both participants know what the network will do, or has done, with the packets that they send or receive. But that is, as I said, still really early-stage work. Next slide, please.
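A rough sketch of the chunk idea, using the standard SCTP chunk framing from RFC 4960 (1-byte type, 1-byte flags, 2-byte length, value padded to a 4-byte boundary); the specific type value for an in-network function is a made-up placeholder, since the draft's assignments are not given here:

```python
import struct

# Hypothetical chunk type standing in for one of the values
# reserved for IETF use; not an assignment from the draft.
CHUNK_TYPE_COIN_FILTER = 0x43

def build_chunk(chunk_type, flags, value):
    """Serialize an SCTP-style chunk: 1-byte type, 1-byte flags,
    2-byte length (header plus value, before padding), then the
    value padded to a 4-byte boundary, per RFC 4960 framing."""
    length = 4 + len(value)
    padding = (-length) % 4
    return struct.pack("!BBH", chunk_type, flags, length) + value + b"\x00" * padding

chunk = build_chunk(CHUNK_TYPE_COIN_FILTER, 0, b"sensor-1")
```

A middlebox that recognizes the agreed type can then act on the chunk, while both endpoints know from the type alone what processing to expect.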
N
N
So, this is again rooted in our industrial background: we see a lot of legacy devices which are hard to update, and they commonly don't feature any security or privacy mechanisms. And, especially if you have a critical infrastructure there,
N
your sensitive data and processing may be leaked to the outside. We are now, or rather my colleague Ina is, currently experimenting with the potential of retrofitting additional protection mechanisms using in-network computing. One thing that she's currently taking a look at is the Manufacturer Usage Description (MUD), which basically gives us the opportunity to define what kind of traffic patterns, or traffic behavior, can be expected from certain devices. Then, based on rules,
N
we can check whether these patterns are actually fulfilled by the end hosts, or whether there are some anomalies. There she's currently evaluating whether we can enforce even more sophisticated rules inside the network using P4 switches.
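A toy illustration of the rule-checking idea; the real MUD model (RFC 8520) is expressed as YANG/JSON ACLs, so the flow tuple and field names here are simplifications, not MUD's actual schema:

```python
# Illustrative allow-list in the spirit of MUD: the manufacturer
# declares which flows a device is expected to produce, and anything
# else is flagged as an anomaly. Names and ports are made up.
ALLOWED_FLOWS = {
    ("sensor-1", "controller", 5684),  # e.g. CoAP over DTLS
}

def conforms(src, dst, dport):
    """Return True if the observed flow matches the device's
    declared traffic pattern."""
    return (src, dst, dport) in ALLOWED_FLOWS
```

On a P4 switch, the same check would be compiled into match-action table entries rather than evaluated in software.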
N
N
Whenever we find something that is valuable for the group, we would like to update our drafts accordingly, and we are still very much looking forward to feedback from the research group, especially for our rather new drafts, so especially the security and privacy draft. So yeah, please feel free to take a look at it and give us feedback, and we will be happy to update our drafts accordingly. And then the last point goes rather in the direction, as I said at the beginning, of what Marie-José was also talking about and what Dirk said.
N
So, I've thought a bit about this: if I were someone from the outside looking at COIN, would I directly know what we are doing, or would I, maybe even just at a glance, have a problem really understanding what we're doing?
N
I've really found Dirk's and Jörg's draft to be helpful as a basis, because it tackles the question of how we understand COIN, and what kinds of different meanings of COIN there are.
N
I would really support it if we potentially extend that draft, maybe as a basis for the research group, so that we can align all the other research around such common definitions. I would also potentially have something where we include general definitions that we want to use, because I think in a lot of ways we are often talking about similar things but using very different words for them, which obviously makes it more difficult to understand each other.
N
I would appreciate it if we could maybe think about that for further drafts. I think we'll leave the rest here due to time constraints, I guess. But thank you, and if there's anything that you would like to discuss, I'm happy to take questions if time permits, or on the list, or via mail.
C
Actually, I would like to thank Ike for that last slide, because it's nice to see things organized. And obviously, again, the data discovery I think is something that we need to realign a little bit. But thank you for the last presentation.
C
So, are there any questions? I'm following the discussion on the chat and I'm not seeing a specific question. Does anybody have a specific question?
C
Yes, Stewart, we can take that to a private discussion. Okay, the last one is me and Eve. So this is just us; others as well, yeah.
C
It's us, and the ones that are there, because I don't think Roberto and Edgar made it; they're on vacation. So, this is something that actually comes from the charter and has evolved not from the top down, but from the bottom up. Next slide.
C
So obviously, this is actually something that you, Eve, said before: the network is the data. I've been working for quite a while now on data-driven services and applications, and on the fact that there's a lot of delay-sensitive and critical decision making that needs to be done close to the location of the data gathering. That is much beyond what is usually done, where you gather the data, you send it over your Wi-Fi network and up to the cloud, and things are all done there.
C
And so there is this cloud-edge continuum, and there's this powerful hardware, the Tofino switches, and a lot of software abstractions that are now available in IoT and edge computing, and we started mixing all these things together. You know, the Tofino, which does a lot of the match-action:
C
that is very, very interesting for data filtering. I have even had discussions with the Intel people, who are part of the Tofino switch, where we also wanted them to start doing match-action on metadata, and not just on header addresses, so that we could have better edge computation systems. So, next slide.
C
C
Okay, there's actually a lot of things in there. There's pub/sub that can actually drive things, and this is why we want to start doing things on the metadata: pub/sub, or even ICN-based applications that want to identify the packets that are interesting. There's filtering. There's, obviously, data acquisition that can happen in multiple locations, and there's actually cloud and local processing for that data.
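The metadata-driven match idea can be sketched as a tiny pub/sub router in which subscriptions match on packet metadata rather than on header addresses; all field names and destinations below are illustrative, not from the slides:

```python
# Each subscription is a metadata filter plus a destination; a packet
# is delivered to every subscriber whose filter is a subset of the
# packet's metadata. Field names are made up for illustration.
subscriptions = [
    ({"type": "image", "site": "greenhouse-3"}, "local-processor"),
    ({"type": "telemetry"}, "cloud"),
]

def route(metadata):
    """Return the destinations whose filters match the metadata."""
    return [dst for flt, dst in subscriptions
            if all(metadata.get(k) == v for k, v in flt.items())]
```

The same subset match is what a programmable switch would implement as match-action entries keyed on metadata fields instead of IP addresses.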
C
And
again
you
need
to
have
some
of
that
processing
done
not
on
the
network,
but
in
the
network,
for
it
to
be
very,
very
efficient
and
being
able
to
support
these
critical
applications.
I
will
tell
you
that
the
first,
the
first
time
that
I
thought
about
this
was
actually
in
a
vertical
agriculture
application,
where
there's
a
lot
of
local
decisions
that
are
made
based
on
the
type
of
images
that
you
have
and
you
make
decisions
based
on
what
you
see
and
what
you
see
has
to
be
determined.
C
Then the decision has to be determined by local algorithms, but then the results also have to be filtered depending on where they're going. If it's, say, something that can be resolved locally, it is sent to a local processor; if it's something that is much more related to overall management of the system, then it's sent to the cloud.
C
So the idea of having this common data layer is to ask: okay, what are the elements, in terms of hardware and in terms of interfaces, that we could define that would allow these decisions to be taken both locally and in the cloud, and to have the processing done both locally and in the cloud, in collaboration with networking in the middle, to allow the communications between the different processors and also with the different, you know, I call them data networking mechanisms, which are related to ICN, by the way. Next slide.
C
So, I'm running out of time. There are actually data-layer functions: I mentioned the filtering and the pub/sub; there is a service, data, and function discovery; there's orchestration; there's cache management, which works with the size of the cache and the staleness.
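A minimal sketch of cache management driven by exactly the two knobs mentioned, cache size and staleness; this is not any particular system's API, just an illustration of the idea:

```python
import time

class DataCache:
    """Toy data-layer cache bounded by item count (size) and by
    maximum entry age (staleness)."""

    def __init__(self, max_items=128, max_age_s=5.0):
        self.max_items = max_items
        self.max_age_s = max_age_s
        self._store = {}  # key -> (timestamp, value)

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        if len(self._store) >= self.max_items and key not in self._store:
            # Respect the size bound by evicting the oldest entry.
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]
        self._store[key] = (now, value)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None or now - entry[0] > self.max_age_s:
            return None  # missing, or too stale to serve
        return entry[1]
```

The `now` parameter is only there to make the staleness behavior easy to exercise deterministically; a real deployment would use the clock directly.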
C
C
What's inside the in-network computing devices is what we actually want to go more into. So anyway, these are all the functionalities. This is very early work, by the way, and there are a lot of other people who are looking at similar things; we want to make sure of that. We also want to allow, and this is actually for Jörg, pipelining of functionalities and pipelining of decision making in that layer.
C
So there's a lot of AI and ML in there, but I take the point: yeah, if you take it to the simple layer, it could look like just doing in-network computing devices, but we also want to do, I would say, computing devices in the network and not on it. Next slide. I guess that's going to be a theme now, the difference between in and on. No problem, Jörg. Okay, so, we wanted...
C
This is actually very early work, and I think one reason we wanted to expose it is that it's related to automotive, as cars are becoming way more... the car is the network now. I mentioned agriculture 4.0; there's a lot of next-generation IoT, which includes a lot of intelligent controllers that do local functionality. Also, we want to look at solutions that we've looked at and maybe prepare a draft; it's related to work that's been done at Ericsson.
C
That's why we have authors from Ericsson. And because there is an element of discovery, of capabilities, of functionalities, and of data, that's why we feel it's also related to the data discovery. So this is very early, but I know that there are other groups who are looking at adding functionality in terms of match-action and filtering, using header information and more than header information.
C
You
know
like
network
layer
functionality
that
is
based
on
on
what
I
call
match
action,
so
we
wanted
to
expose
the
work,
although
it's
very
early
to
make
sure
that
you
know,
people
know
that
this
is
actually
and
this
is
actually
within
our
our
charter.
It's
actually
the
part
of
the
charter
that
we
need
to
work
on,
and
so
we
wanted
to
expose
it.
So
thank
you
for
listening
to
something
that's
actually
still
very
much
in
in
the
becoming
instead
of
in
the
coming
I'm
following
the
discussion.
C
There's actually a lot of coded computing and homomorphic encryption and all kinds of network coding. I think, again, we can take that offline. If you want to conclude...
A
B
C
C
We want to start adopting some drafts again; that was a previous slide, sorry. And I think we had a few candidates. I think right now the top candidates are Dirk's and Ike's.
C
We'll push that to the list to ask, you know, the...
C
I'm actually doing two things at the same time, so I'll start focusing on what I'm saying. So yeah, we want to start adopting, those two are the top ones, and we'll send it to the list.
C
There are also potentially new research topics, because there is a lot of work being done on what I call in the network, and not on the network, and maybe we want to have some invited talks. I'd like to re-invite the people from Cambridge University, for example. Okay, there's a security discussion on the side that I will ignore for the moment. So, yeah, I'd like to re-invite the people from Cambridge, Noa Zilberman and company, and there's other work
C
that's been done out of California, I think it would be Stanford and Berkeley, that could be quite interesting. So I will probably, and we will chase that, but...
B
I would invite the folks who are having a security conversation to maybe pipe up.
A
C
The people who are doing the security discussion, Stewart, Dirk, and Dave: you're the current invited talk. Would you like to say something?
C
C
Okay, so you keep dropping, so I'll read what you sent. Dave said: "Just to throw a monkey wrench: there are three possible approaches to security. Figure out how to distribute the key to all the possible intermediaries while controlling transitive trust and provenance; depend on FHE; or provide a shim layer to migrate the things you want to operate on to be outside of the encryption envelope." I think this has to do with the fact that when you start putting computing in a network, the packets that are coming in are very much encrypted. And yes, Lucia.
C
This becomes a question of trust establishment. You can talk, Lucia, but I think, way back when, we had talked about how, when you start putting computing in the network, you start needing trusted entities.
L
To me, actually, I think the fundamental question is really trust establishment, different from what we do today on the Internet. We somehow ignored how the trust gets established; we think that there are the CAs out there and they take care of that question. I think for this edge computing thing we have to take a fresh start on that question, because if we think about it, there will be lots and lots and lots of edge devices.
L
Where do we anchor trust for them? I'm pretty sure nobody is going to pay for them to get a commercial CA certificate. Now, do you trust the certificate you get from Let's Encrypt? So, the good questions are there. I think once we have trust establishment, there are existing proposed solutions for how you can manage and answer all the other questions; I think they provide things like key distribution in an automatic way, rather than manual configuration or something. But the first thing, I think, is the most difficult.
B
And actually, I would like to encourage folks, given that we have been running in tandem with the Applied Networking Research Workshop, to kind of think of these IRTF research groups in that vein, in that we really were given the directive to bolster, you know, these discussions around security, privacy, and trust. And if you hear or see, you know, a presentation or a paper about computation in the network that has an interesting, reasonable, provocative,
B
you know, name your adjective, description of security, privacy, or trust, let's try to invite those speakers here. Let's be more active in our pursuit of those who are working in that arena. That was kind of my takeaway of what's lacking here. Okay.
C
And since we have Dirk on the line: I think there's also some of the discussion that happened inside DINRG, the Decentralized Internet Infrastructure Research Group, and, you know, in the blockchain and Ethereum spaces,
C
which is also quite relevant. And something I did not present about the data layer is that we're also looking at that, to at least create the network of trusted entities that are collaborating together. So there's more stuff. Thank you, Lucia. Okay, Dirk, go ahead.
C
H
F
Thanks, and yeah, I just want to say that I'm not sure it's a good idea to mix all these topics together. I think from the discussion today, also in the chat, we have learned that there's quite a bit of, you know, scoping and clarification to do, like on-network versus in-network computing, these kinds of things, this taxonomy that we talked about. So let's not, you know, broaden this even further; I think that could be challenging.
F
C
You know, we have a lot of related research groups. We have Thing-to-Thing (T2TRG), which is related to us; we obviously have the Decentralized Internet Infrastructure Research Group, which is related to us; we have ICN, which is related to us. And I think we should not be afraid of either sending work out or actually doing joint work, so that we do not duplicate what is being done elsewhere.
C
So now we have two minutes before we actually shut down. I have the impression that 109 will be virtual; do we want to do a hackathon? I don't know.
C
The experience from last week was halfway good; actually, the Gather town was kind of fun, but I don't think we made a lot of progress, by the way, in the one that I was in. So we can talk about that at the interim. Now we have one minute.
C
Jeffrey is really sorry that he couldn't join; he had real computer problems. I thank everyone. You know, at one point we were 78 people, which online is amazing, and thank you very much, everybody, for participating. Thank you more particularly to all the presenters and the people who've done the great work.