From YouTube: IETF111-COINRG-20210726-2300
Description
COINRG meeting session at IETF111
2021/07/26 2300
https://datatracker.ietf.org/meeting/111/proceedings/
A: Okay, it's four o'clock. Good day, good morning, good evening, very late into the evening, and thank you to all of you who are in unusual time zones and are up well beyond when you are normally up. You are attending COIN, the Computing in the Network Research Group, and we welcome you on behalf of Jeffrey and Marie-José and myself, otherwise referred to as GEM, and you are participating in the IETF.
A: For those of you who are long-time IETFers, you know that there's a Note Well about the policies that affect various topics of discussion, conduct, and patents. We will, of course, be participating in audio and video, and so much of what happens here is public record.
A: Should you choose to follow these links from our slides... and, of course, the recording started several minutes before the session, so please note that. Of course, this meeting is part of the sister group, the IRTF, the Internet Research Task Force, which really focuses on longer-term research goals. It is primarily not a standards development organization; rather, we are conducting research, trying to foster research, and trying to understand the gaps that may or may not lead to IETF discussion and uptake.
A: For those of you who haven't participated before, there's certainly a primer for participating in the IRTF for IETF participants that you can have a look at. You're here, so you've successfully maneuvered onto this call.
A: Despite what that link says at the very top of this page: please be muted if you are not speaking. If you are presenting, feel free to display video if you're comfortable and have the bandwidth, although it's not required. And of course we would love for folks to participate in the collaborative note-taking tool, which is not only specified here as a URL but is also built into this Meetecho tool. I personally am not participating in the Jabber, but again, everything has been integrated into this tool.
A: So we can see your questions, and we can see the chat window, and we will try to monitor those things as the session goes along. And if you're not already on the mailing list...
B: Sorry, I just woke up. So this is today's agenda. As we announced, we'll mainly focus on the use cases, but we will start with two COIN-related activities. One is the Piccolo project, and I hope Philip has joined us; the second is the Dagstuhl seminar.
B: Dirk Kutscher will introduce this; it is related to compute-first networking. And then we have three presentations about the use cases. The first one is from Dirk Trossen, based on the project between BT, Huawei and Cambridge, and they will also introduce their insights and findings on use cases. The second is an introduction based on a draft on mobile virtual networks, and then we have Kiretti to introduce his paper at INFOCOM this year. After that, we'll have a short presentation from Ike updating the current use-case draft, and then several topics after this meeting.
A: I mean, if you would like to say something about the documents... or Marie-José?
C: We have two research group documents, the use cases and the directions, the latter of which is expired, but I know from Dirk that it's going to be updated.
C: Those are the two that are very high in terms of where they are in the stack, and they've been there for quite a long time. They're RG documents, and we intend, probably in the next months, to maybe push them to last call and push them out into the open. Further down the stack there's a new draft, which is the case study from Xavier, and then there's a bunch of other documents; some of them are expired, and some of them need care.
C
So
I
would
like
everybody
to
remember
that
they
are,
they
need,
probably
to
be
managed
and
we
are
going
to
start.
You
know
going
through
them
and
contact
the
authors
to
make
sure
that
they,
you
know,
we
know
which
ones
are
still
ongoing
and
which
ones
will
be
more
or
less
abandoned.
C: Oh, you're already in the presentation. Okay, I had a slide on some of the milestones, so maybe it's going to be presented later. We've basically addressed all our milestones; if I want to see the glass half full, I would say we were doing really well.
C: You know, we've been there for two years now, and we, the GEMs, would like to see if we need new ones, which ones have been achieved, and which ones need to be updated. So, without further introductions from us, we would like to go into the presentations, the first one, because of time. Well, actually, everybody that's presenting today is, I think, in a bad time zone.
C: So the first one is actually an update on the Piccolo project, which looks at in-network computing and networking, and the presentation will be done both by Philip and by Peer, and I will let them talk about their very interesting use cases.
D: So yeah, thank you very much. We're going to talk about a couple of use cases and proofs of concept that we're working on in the Piccolo project. Peer is going to talk about a smart factory use case shortly, but first of all I'm going to talk about one that we've been calling risk monitoring. Jag and Dennis and Alex are the main people who've been doing the work, but I was the one who volunteered to try and stay awake.
D
Okay,
so
yeah
onto
the
second
slide.
Thank
you.
So
this
is
about
automotive
risk
monitoring.
So
the
idea
of
the
use
case
is
that
it's
a
real-time
assessment
of
situational
risk
based
on
the
knowledge
of
a
vehicle
and
the
behavior
of
the
of
the
occupants
and
about
what's
happening
around
outside
the
vehicle
and
maybe
about
other
things
like
like
the
weather
or
the
road
conditions.
D: So it's the combination of information from vehicle-based sources, from other vehicles, from road conditions, from the driver behavior and so on. That information might be displayed to the driver as a warning; or, if it's a self-driving car in the future, it might be an alert to the human in it; or you could imagine it might be information to, say, a fleet manager, or, in some kind of aggregated form, maybe to people like traffic planners.
D: So next slide, please. Okay, this is how you might try to do it today, or kind of a default way you might do it today. There's video processing, which is separate from the vehicle data systems, both going into a cloud-based analytics engine. The video processing would be done either on the camera or, if there are several cameras in the car, the visual processing might be combined across the cameras.
D
So
this
is
our
target
architecture
from
what
we're
we're
aiming
at
in
in
the
project.
So
I
think
that
the
key
message
here
is
one
about
sort
of
flexibility,
so
flexibility
in
different
sorts
of
ways.
D
So
you
know
one
way
shown
there
is
that
we
want
to
explore
doing
some
of
the
processing
in
in
the
edge
network
in
the
sort
of
fog
network
at
the
you
know
in
the
operator
cloud
near
near
where
the
car
is
or
the
vehicle
is,
rather
than
doing
it
all
within
the
vehicle,
so
that
might
enable
you
to
have
a
simpler,
cheaper
edge,
node
or
cameras
in
the
vehicle.
D
So
another
way
you
might
have
flexibility
is
so
that
you
can
update
and
alter
the
the
algorithms
some
machine
learning
algorithms
easier.
So
you
know
so,
and
rather
than
having
often
this
the
software
and
these
systems
is
tightly
coupled
to
particular
hardware
in
the
vehicle,
so
to
enable
more
kind
of
flexibility
there,
you
might
also
be
able
to
have
different
kinds
of
algorithms
there,
depending
on
what
sort
of
vehicle
it
is.
D: You might have different algorithms, you know, trying to detect badly behaved passengers; or maybe you want information about how many people are wearing masks or not wearing masks, and that might be a current kind of example.
D: Okay, I think, yeah, next slide please.
D: So this is the design of what the proof of concept looks like. The stage we're at is that the components for the different bits have been chosen, and it's now in the bench-testing or lab-testing stage, so late this year it will be moving into the vehicle.
D: The vehicle data side, which is the part that Bosch is leading: there it's a dongle, an open-source software and hardware project, which plugs into the on-board diagnostics port in the vehicle and extracts information from the CAN bus of the vehicle.
D
That's
then
fed
to
this
in
the
in
the
the
network.
Just
about
read
the
logo
that
says
genevieve.
So
that's
an
alliance
for
for
connected
vehicles,
platform,
linux
based
for
basically
analyzing,
do
the
doing
the
analytics.
D
So
then,
the
part
in
the
in
the
network
in
the
edge
network,
so
fluentic,
is
a
small
company
in
the
uk
working
there
on
secure
containers
building
on
top
of
hardware
security.
So
it's
intel
sgx,
it's
a
secure
containers
framework
called
scone.
D
That's
on
top
of
that.
That
does
things
like
resource
allocation
and
reporting
from
containers
and
attestation
and
then
sensing
feeling,
which
is
a
small
another
small
company
in
the
uk,
who
have
various
vision,
processing
products
today
so
they're
looking
at
changing.
That's
that
side
of
things
and
adapting
which
elements
are
done
in
the
vehicle
and
watching
the
network.
D
So
next
slide,
please,
which
is
the
last
one
so
just
to
touch
on
some
of
the
topics
that
are
under
development
and
research
questions.
So
these
are
really
under
the
general
heading
of
of
control
of
the
of
the
nodes
in
the
network.
D
So
that's
control
of
things
like
resource
control,
being
able
to
flex
that
and
adapt
adapt,
the
algorithms
there
or
the
frame
rate,
the
cameras
or
whatever.
According
to
resources,
things
like
about
delivery
of
functions
to
the
nodes
and
what
sort
of
functions
they
might
be
distribution
of
the
functions
so
things
under
the
sort
of
general
orchestration
heading
thinking
about
what
control
or
knowledge
the
application
might
have.
D
What
the
network's
up
to
and
kind
of
flip
side.
Of
that,
what
additional
functions
an
edge
network
might
want
to
offer.
So
maybe
privacy
filtering
or
notably,
scope,
machine
learning,
algorithms
things
under
general
security
and
privacy
heading
which
have
touched
on
and
then
bearing
in
mind.
This
is
a
vehicle,
so
just
the
general
context
of
mobility
and
handle
mobility
and
how
you
handle
the
problems
associated
with
that.
D
So
that
was
all
I
wanted
to
just
quickly
run
through
on
that
particular
use
case.
Just
one
slide
at
the
end
about
the
project.
Yes,
so
that's
it
for
me
and-
and
I
can
either
ask
questions
now
or
quickly
or
we
can
peer,
can
talk
and
then
have
the
questions
at
the
end.
But
I'll
see
kiretti
you're
you're
in
there
hey.
Can
you.
G: You expect that your networking works because of the wired network inside the car; and when it's out in the network, your connectivity may either actually go down or experience fluctuation. So, in designing the balance between what happens in the edge node and the Piccolo node, I think of the safety of the car from the point of view of: there's a function that I really need to be secure.
D: You have to make sure that the things you're doing in the network are not safety-critical; or, if they are, you've got to work out how to fall back quickly enough, and that's probably difficult, I'd say. This is an example use case, and we're really using it as a way of exploring the issues around distribution and orchestration, rather than, you know, nailing our colours to the mast that this is the use case.
G: Yes, that sounds good. I think so, especially in the case of a car, where you want real-time actions in case, you know, the sensing system sees someone crossing the road unexpectedly, or whatever. But I'm sure that, I mean...
D: I mean, I don't know; although I mentioned self-driving cars, that's an example of how it might be used in the future. Actually, in the current concept it's human driving; this is just, well...
G: Even in assisted driving, you know, there are these cars that will detect that the car in front of you is closer than you'd like it to be. It's not driving for you, but it's giving you that beep-beep: hey, you need to slam the brakes on.
A: Okay, we'll be interested to hear how some of those issues of trust and security play out, and we'd welcome you coming back to tell us how you evolve that. I think we're into the second talk; thank you.
C: I was going to say, we now have Peer. You'll have to be a bit faster, so we maintain time.
H: Okay, I'll rush you through the smart factory use case. First, I have to introduce what kind of factories we are actually dealing with. We will actually be getting access to a real factory, and we'll be controlling a real factory producing real things, namely these...
H
Electrical
motors
are
actually
electric
pumps
for
power
brakes
like
that
that
are
used
in
cars
in
modern
cars
and
there's
one
one
of
the
few
pictures
I
was
able
to
actually
get
from
the
factory,
so
that's
kind
of
like
you
see
the
status
where
how
they
are
kind
of
like
coming
in
and
out
that's
from
a
test
drive,
but
we'll
see
more
of
that.
H: So first let me introduce what we want to achieve here. Normally, factories are designed on the drawing board, and then basically everything is programmed by hand: all the movement of all the pallets and all the processes, everything, is handcrafted in PLC programs. That is not very flexible, and it also wastes a lot of effort.
H: The goal we want to achieve here is plug-and-produce. You actually stick together your conveyor-belt system, and then it self-learns its topology and routes pallets in a smart way, in an optimized material flow, basically; and that is built into the conveyor-belt system itself, not using an external server. So that's how such an assembly line looks, and we'll just skim over it; it goes in circles.
H
We
are
actually
distributing
in
the
in
the
use
case,
the
pellets
between
these
spooling
machines.
So
these
are
the
machines
that
actually
spool
electric
motors
each
machine.
Each
machine
can
spool
six
motors
and
sometimes
not
all
six
work,
so
we
have
to
be
flexible
in
distributing
the
pellets
and
and
also
different
kinds
of
motors.
Here
I
will
so
that's
actually
a
predecessor
system
where
we
have
little
nodes
connected.
H
I
just
want
to
show
you
like
how
this
movement
looks
like
you
have
these
pellets
and
you
can
go
through
corners
and
they
are
taken
by
these
conveyor
belts.
I
cut
this
short
because
of
the
time
constraints,
even
it's
like,
maybe
so,
let's
head
back
to
the
conveyor
belt
system,
so
that's
the
the
floor.
H
Plan
of
the
factory
was
just
the
conveyor
belt
and
everything
else
left
away,
and
so
again
we
are
focusing
on
this
spooling
machine
things
and
the
blue
dots
that
you
see
up
here
are
the
embedded
nodes
that
we
attach
to
the
to
the
conveyable
system,
which
are
later
sold
together
with
a
conveyor
belt
system,
or
that's
the
plan
actually
to
make
it
to
a
product
of
of
bosch
rexroad.
Who
makes
these
conveyor
belt
systems.
H
And
so,
if
you
take
away
the
the
grapple
system,
we
have
a
kind
of
like
mesh
network
architecture.
So
you
can
daisy
change
these
nodes
and
some
are
connected
to
a
switch,
and
so
that's
kind
of
like
a
typical
network
topology
we
would
have
in
practice,
and
people
don't
want
to
depend
on
the
topology
and
the
the
system
that
we
implement
is
using
message,
parsing
kind
of
like
an
active-based
system,
because
we
are
using
along
programming,
language
and
allowing
systems.
So
we
are
running
along
on
all
these
little
nodes.
H
So
we
have
processes
and
we
have
these
nodes,
which
are
the
actual
compute
nodes
and
we
have
to
connect
it
to
a
network
and
the
process
send
messages
and
that's
basically,
all
we
care
about.
We
have
everything
we
can
build
everything
with
processes
and
messages,
and
the
big
question
is:
if
you
have
a
complicated
system
and
a
complicated
topology,
how
do
we
automatically
map
these
processes
to
nodes,
so
everything
works
smoothly
within
necessary
constraints?
That's
our
core
question,
and
here
we
have
basically
two
modes
of
operation.
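The mapping question can be made concrete with a toy example. The sketch below is in Python rather than the project's Erlang, purely for illustration; the process names, demands, capacities, and the greedy strategy are all assumptions for the sake of the example, not the Piccolo design.

```python
# Toy process-to-node mapping: place each process on the node with the most
# remaining capacity that can still hold it (heaviest processes first, so
# large demands are not stranded). Names and numbers are illustrative only.

def map_processes(processes, nodes):
    """processes: {name: cpu_demand}; nodes: {name: cpu_capacity}.
    Returns {process: node} or raises if a process cannot be placed."""
    remaining = dict(nodes)
    placement = {}
    for proc, demand in sorted(processes.items(), key=lambda p: -p[1]):
        node = max(remaining, key=remaining.get)   # least-loaded node
        if remaining[node] < demand:
            raise RuntimeError(f"no node can host {proc}")
        placement[proc] = node
        remaining[node] -= demand
    return placement

placement = map_processes(
    {"router": 2, "planner": 5, "belt_ctrl": 1},
    {"node_a": 4, "node_b": 6},
)
print(placement)  # the heavy planner lands on node_b, the largest node
```

A real remapping step (mode one in the talk) would rerun something like this whenever a node fails and its capacity drops out of the `nodes` dict.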
H
So
only
occasional
recalculation
here
when
preconditions
change
like
a
node
fails
and
we
need
to
kind
of
like
remap
and
on
the
other
hand,
we
have
a
very
dynamic
mapping
where
we
mainly
want
to
use
the
compute
power
of
all
the
hundreds
of
embedded
nodes
along
the
conveyor
belt
and
and
and
run
complicated,
optimization
algorithms
like
heuristic,
search,
optim
planning,
algorithms
and
use
all
these
compute
nodes.
H
So
we
don't
need
to
run
a
big
server
for
the
for
the
optimization
processes
and
and
here
processes
can
migrate
and
spawn
regularly
and
and
basically
this
is
much
more
dynamic
and
we
want
to
do
this
with
local
knowledge.
H
So
if
you
look
at
these
at
our
network
from
before,
basically
these
the
the
static
mapping
is
really
is
relatively
local.
So
this
is
for
the
control
loops
that
actually
control
the
the
the
neighboring
things
make
sure
that
nothing
collides
on
the
on
the
lowest
level
and
the
dynamic
mapping
is
basically
the
whole
network.
We
want
to
use
all
the
nodes.
H
So
maybe
just
a
short
introduction,
but
without
going
into
details,
so
we
I
mentioned
already
that
we
implement
all
this
in
erlang
with
a
built-in
distribution
functionality,
and
we
run
these
directly
on
hardware,
the
lnvms.
H
So
we
can
actually
have
this
in
small
embedded
systems,
but
I'll
skip
my
skin.
The
details
here,
the
our
main
research
questions
here
are
that
can
we
can
we
actually
can
the
orchestrator
that
that
maps
all
these
computes
to
these
nodes
computation
these
nodes
be
distributed
in
the
same
in
the
same
sense
as
the
computation?
H
Basically,
we
need
to
run
a
distributed
online
planning,
algorithms
or
an
algorithm
that
re-plans
constantly
basically
contains
a
digital,
twin
and
all
kinds
of
heuristic
search
things,
and-
and
is
this
even
implementable
in
on
such
an
iot
based
system,
because
that's
kind
of
like
a
new
thing
that
I
at
least
I
haven't
seen
before,
and
and
how
can
we
extend
this
into
a
generic
solution
for
all
kinds
of
similar
network
computing
in
network
computing
and
with
ethernet
tsn,
which
is
and
then
also
that
net?
H
If
you
have
hard
real-time
networking
capabilities
and
path
reservation,
this
would
be
a
possible
extension
that
we
actually
can
do
hard.
Real-Time
things
in
in
these
networks.
H
Yeah,
I'm
done
it
was
my
last
slide.
Oh.
C
It
was
your
last
light.
Okay,
great,
we
have
time
for
maybe
one
question.
A
Will
ask
myself
to
ask
my
question.
I
was
wondering
if
you
could
give
us
a
sense
of
what
kinds
of
hard
real-time
constraints
are
part
of
the
system
in
terms
of
the
magnitude
of
the
maybe
the
control,
loops
or
or
other
timing
issues.
For
this
in
order
to
explain.
H
So
that
the
current
system,
that
that
only
controls
the
conveyor
belts
like
like
you,
have
seen
them
in
the
video
there,
we
have
kind
of
like
tens
of
milliseconds
or
five
milliseconds
and
soft
real
time
would
be
sufficient
because
nothing
bad
happens
if
you're
slow.
It's
just
like
the
pellet,
doesn't
move
immediately.
H
But
if
you
want
to
kind
of
like
do,
combined
access,
basically
like
you
want
to.
There
is
conveyor
belt
systems
that
that
can
precisely
control
the
position
of
the
pilot,
and
you
want
to
kind
of
like
synchronize
this
with
a
robot
arm
there.
You
need
to
go
to
sub
millisecond
hard,
real-time
requirements,
and
that
would
be
an
extension
of
the
use
case.
C
Okay,
thank
you
both
peter
and
philip,
for
this
update
on
piccolo.
The
next
one
is
dirk
dirk
and
eve
and
john
crowcroft
organized
a
fantastic
seminar
at
dexter
about
a
month
ago,
and
dirk
is
going
to
give
us
a
teaser
on
the
report
of
this
this.
This
workshop.
E: Yes, thanks for giving me the opportunity. I tried to use Meetecho's new slide-sharing feature, but it tells me that there are no slides available.
E: Okay, yeah, hi everybody, and thanks again. This is a quick teaser of our upcoming report on this seminar. It's of course impossible to summarize a three-day seminar in 20 minutes, and so my co-organizers and I are still working on the report.
E
To
be
honest,
so
this
was
a
the
actual
seminar
that
took
place
earlier
in
june,
fully
virtual,
unfortunately,
but
we
organized
it
in
a
way
that
we
almost
ran
it
around
the
clock,
which
so
put
some
particular
burden
on
colleagues
in
the
u.s
and
asian
time
zones.
So
I
think
two
days
a
bit
payback
for
the
europeans,
and
so,
if
you
don't
know
dark
studsu
is
a
computer
science
convention
center
in
a
remote
place
in
far
west
germany.
E
So
an
an
old
remodeled
castle
and
it's
normally
a
great
place
for
having
in-depth
research
discussions
in
this
secluded
place,
and
so
we
we
of
course
wanted
to
have
this
on
site.
E
But
you
know,
as
you
can
imagine,
that
didn't
work
out
was
actually
planned
for
last
year,
and
so
we
just
did
it
this
year
fully
online,
and
so
the
objective
we're
actually
quite
in
line
with
with
our
coin
research
group
topics
here
so
just
like
discussing
compute,
first
networking
so
as
a
moniker
for
a
yeah
kind
of
new
way
to
look
at
integrating
or
combining
computing
and
networking.
E
So
so
is
there
like
a
space
for
a
new
approach
for
doing
that
that
it
goes
beyond
what
we're
already
doing
so
packet
flow
processing
or
just
overlays,
and
so
the
idea
was
a
bit
learning
from
this
distributed
computing
systems,
trying
to
take
advantage
of
recent
trends
in
hardware
and
and
software
technologies
and
yeah
trying
to
shape
a
research
agenda
essentially
and
yeah.
I
think
it
was
a
really
interesting
event,
thanks
everybody
who
who
was
there
and
put
in
all
the
time.
E
So
we
we
have
like
80
pages
of
of
nodes,
that
we
are
currently
distilling.
It's
a
lot
and
many
many
really
interesting
insights.
So
today,
I'm
just
giving
you
a
quick
overview
and
to
make
you
excited
for
the
actual
report
that
is
coming
out
soon,
so
maybe
as
a
like
motivation.
E
So
there
are
many
successful
examples
of
distributed
computing
systems
that
are
used
a
lot
which
actually
the
workhorses
for
many
relevant
applications
today,
so
apache
beam
all
these
data
flow
systems
and
so
on,
they're
all
running
as
overlays,
and
so
one
way
of
looking
at
this
is
to
say:
could
you
actually
do
a
better
job
by
leveraging
the
network
to
optimize
performance,
but
also
make
it
easier
to
compose
those
systems,
so
these
systems
are
very
traditionally
operated
with
like
kubernetes,
orchestrators
and
so
on.
E: And if you contrast this with the simple dataflow model: what you would actually like to have is the ability to compose systems flexibly, to not care so much about locations, identifiers, addresses and so on, to maybe scale the system automatically, and maybe to have a nice way to specify these systems and then a nice, automated way to operate them.
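That "compose by name, not by address" wish can be sketched in a few lines. The stage names, the dict-as-resolver, and the whole pipeline below are invented for illustration; real systems such as Beam or Ray work very differently.

```python
# Toy location-independent dataflow: stages are named functions, and a
# resolver (here just a dict) binds names to whatever worker runs them.
# The caller composes by name only -- no hosts, ports, or addresses appear.

stages = {
    "parse":  lambda x: x.split(","),
    "count":  lambda xs: len(xs),
    "report": lambda n: f"{n} fields",
}

def run_pipeline(names, data):
    for name in names:          # each stage could live on a different node
        data = stages[name](data)
    return data

print(run_pipeline(["parse", "count", "report"], "a,b,c"))  # 3 fields
```

The point of the sketch is the indirection: swapping where a stage runs only changes the resolver, never the pipeline description.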
E: If you contrast this with, say, service function chaining, or the more networking-minded processing systems of today: these assume that you are putting some function on the packet flow and are somehow able to do something useful with the data. These are really different worlds, and so we're kind of trying, I think, to overcome the left-hand-side approach and find a better way to leverage networking capabilities for these relevant distributed computing ideas. Okay, yeah, just a quick pointer to the seminar and our virtual Dagstuhl chapel photo.
E
So
not
everybody
is
on
there
but
yeah.
I
think
you're
gonna
see
some
familiar
faces,
including
two
coin:
rg,
coaches,
okay,
the
problem.
At
a
glance
I
mean
it's
difficult
to
capture
this
fully,
but
these
are
where
the
like
the
talks
we
had
on
the
agenda
and
if
you
have
been
to
doctoral
you
know
that.
So
there
are
many
additional
things
happening
and
the
the
value
is
often
more
also
in
the
discussions
that
these
talks
evoke.
E
Also
some
quite
specific
insights
in
say
what
is
possible
with
current
technology,
so
pear
also
introduced
his
ideas,
and
so
we
also
in
the
end
discussed
a
bit
about
what
could
actually
be
nice,
say
more
substantial
research
ideas
or
phd
topics,
for
example,
and
so
I
just
this
is
a
it's.
A
very
eclectic
collection
here
I
just
picked
some
say
highlights
that
I
think
should
give
you
some
some
idea
of
what
what
has
been
discussed
so
in
general.
Why
is
this
relevant?
E
Well,
we
all
know
in
konergy
that
applications
are
becoming
more
multi-party
and
distributed
with
like
things
like
devops.
You
would
also
like
to
constantly
update
parts
of
the
system,
so
it's
a
kind
of
somehow
loosely
coupled
but
should
still
be
operational
system.
E
And
if
you
look
at
like
the
hardware
trends
and
like
the
more
like
the
hard
facts,
basically
so
complete
computing
and
communication
on
different
cost
performance
trajectories.
So
can
you
consider,
or
can
you
leverage
this
for
building
systems
that
yeah
enough
make
the
most
out
of
these
different
of
these
two
domains?
E
And
so
I
picked
this
one.
A
use
case
topic
here
because
we
heard
about
the
others
before,
and
that
is
health
in
general,
so,
for
example,
health
sensing,
so
making
use
of
all
these
sensors
that
we
have
fitness,
sensors,
pacemakers
and
and
so
on,
and
try
to
share
the
say,
analytics
results
of
this
data
in
like
in
a
federated
system.
E
So
not
upload
everything
to
the
cloud
and
do
it
there,
but
do
it
in
a
more
distributed
way
with
like
federated
principal
component
under
this,
for
example,
or
base
inference
systems
and
in
addition,
well
now
in
the
pandemic
we
are
doing
lots
of
contact
tracing
and
there
are
different
systems
on
the
market.
Some
of
them
are
more
centralized,
others
more
decentralized.
E
So
in
germany
we
have
a
fairly
decentralized
systems.
We
are
largely
happy
with
it,
but
it
could
be
actually
better
because,
due
to
this
decentralized
nature,
well,
you
kind
of
can
actually
cannot
use
the
data
to
its
full
potential,
so
be
nicer.
If
you
could
use
the
data
to
say,
have
a
bit
more
specific
insight
so
like
prevalence
in
some
areas,
tracing
particular
virus
variants
and
so
on,
but
without
giving
up
the
the
privacy
preservation.
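The federated-analytics idea mentioned above, computing a shared insight without moving raw data, can be sketched minimally. The sites, readings, and the mean-only statistic below are invented for illustration; real federated PCA or Bayesian inference is far more involved.

```python
# Toy federated aggregation: each site shares only a local (sum, count)
# summary, never its raw readings, and an aggregator combines the summaries
# into a global mean. A stand-in for the idea, not any deployed system.

def local_summary(readings):
    # This tuple is the only thing that leaves a site.
    return (sum(readings), len(readings))

def federated_mean(summaries):
    total = sum(s for s, _ in summaries)
    count = sum(n for _, n in summaries)
    return total / count

sites = [[70, 72, 68], [65, 71], [80]]     # raw data stays on each site
mean = federated_mean([local_summary(r) for r in sites])
print(mean)  # 71.0 -- same as pooling the data, without pooling the data
```

Stronger privacy would additionally need noise or secure aggregation on the summaries; this sketch only shows the data-minimization structure.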
E: Back to compute-first networking ideas, then, and the facts: if you look at how servers and switches develop and how they differ, I mean, we know that you can do a lot in servers, and servers are getting better and, say, more efficient in access to the network, with better hardware support and so on. Switches, on the other hand, are also getting smarter and more programmable; but even with programmable data planes, what you can actually do with them today is still quite limited.
E
So
it's
not
a
turing
complete
programming
platform,
and
so
these
are
still
different
systems,
and
specifically,
we
looked
a
bit
into
like
p4
programming,
and
so
it's
many
exciting
opportunities,
but
also
many
many
constraints
and
difficulties.
And
so
one
thing
we,
I
think
concluded
was
that
yeah.
You
can
use
this
for
some
things,
but
not
as
a
general
in
networking
in
network
computing
platform.
E
So
the
question
is
more
like:
what
role
could
these
promobile
data
plane
systems
play
in
a
larger
system
that
also
leverage
leverages
say
more
like
what
we
call
server-based
computing
and
yeah
some
research
questions
that
we
discussed?
So
what
what
we
need
to
kind
of
enable
this
new
approach?
E: Peer also alluded to some of these things just before: placing compute functions in an intelligent way; optimizing the use of network resources and computing in a joint fashion; so enabling, for example, compute-graph layout optimization, to see what's going on in the network and reason about, say, more static metrics, but also more dynamic ones, like load and congestion information; and then optimizing the system in different ways, moving functions to data.
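Joint compute/network optimization can be illustrated with a deliberately tiny placement problem. The two-node topology, the latency and load numbers, and the additive cost below are all made up; the seminar discussed the idea, not this code.

```python
# Toy joint compute/network placement: exhaustively place a two-stage compute
# graph (f -> g) on nodes, scoring node load (a dynamic metric) plus the
# latency of the f->g link (a static metric). All numbers are invented.
from itertools import product

latency = {("a", "a"): 0, ("a", "b"): 5, ("b", "a"): 5, ("b", "b"): 0}
load = {"a": 1.0, "b": 4.0}

def cost(place_f, place_g):
    # compute cost on the chosen nodes + network cost of the f->g edge
    return load[place_f] + load[place_g] + latency[(place_f, place_g)]

best = min(product(load, load), key=lambda p: cost(*p))
print(best)  # ('a', 'a'): co-locating on the lightly loaded node wins here
```

With real graphs the search space explodes, which is why the discussion turns to heuristics and to feeding live telemetry into the metrics.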
E: And in the automotive discussion that Phil started...
E: ...there was also the question of which areas you want to consider. Of course, you have to take into account that there are control loops that are really critical and that have to run in the car, for example; so time-sensitive and safety-critical things. And, yeah, some questions: some SmartNICs are also enabling us to do things more efficiently, with FPGAs and so on; and, as I mentioned before...
E
We
have
to
figure
out
how
to
use
like
p41
switches
for
general
computing,
at
least
to
support
it
in
a
meaningful
way
and
then
so.
The
question
is:
how
would
these
like
platforms
really
look
like
in
the
future,
and
so
how
heterogeneous
would
the
overall
system
also
be?
E
And
so
we
we
we
thought
it
might
be
quite
heterogeneous
so
because
yeah,
the
the
switch
platforms
are
really
different
from
from
like
server
or
container
platforms
and
different
topic.
E
If
you
look
at
how
to
manage
and
orchestrate
these
things
so
traditional
systems
so,
like
you
know,
nfv
and
but
also
like
humanities,
based
things,
they
are
typically
starting
from
a
like
the
old
say,
box
model
and
then
moving
towards
software,
and
so
the
way
that
we
orchestrate
systems
is
still
a
bit
influenced
by
the
this
old
notion
of
that
there's
a
bunch
of
servers
in
in
the
end-
and
maybe
it
doesn't
have
to
be
like
that.
E
Maybe
with
a
like
more
clean
state
approach,
you
could
also
arrive
at
say,
more
lightweight
design,
where
maybe
orchestration
doesn't
have
to
play
such
an
heavy
role
in
the
in
the
whole
system.
So
maybe
you
can
actually
have
a
more
like
disabled,
more
self-organized
way
in
these
systems
and
one
tool
for
that
that
we
also
looked
into
was
in
network
telemetry
and
so
basically
adding
more
information
to
the
actual
communication
that
is
used
for
distributed
computing,
for
example.
E
So,
like
feedback,
congestion
or
load
information,
these
kind
of
things
which
would
enable
the
notes
themselves
maybe
to
make
some
some
smarter
decisions
in
the
end
yeah.
E: Then there were also some interesting contributions on, say, rethinking communication and computation abstractions more fundamentally; for example, starting from the idea that you would always communicate in a broadcast way.
E: So, in a Secure Scuttlebutt-like system, for example: could you conceive of distributed computing as, like, distributed memory side effects, so that you broadcast the changes in the system and then implement them on the local nodes? So, kind of that direction.
E: And, yeah, coming to the end: there was a lot more, of course, but I just want to keep to the time here. So my, our, provisional summary is that there are trends in hardware development that are actually important to understand: multi-core systems are also kind of limited, because of thermal, heat and power issues, and there is an evolution of specialized hardware support, both on the switching and on the, say, server side.
E
Then
there
are
interesting
trends
in
distributed
application
design,
so
in
general,
distributed
machine
learning
and
with
the
different
variants
and
applications
are
becoming
increasable
increasingly
distributed
and
and
multi-party,
and
there
are
some
examples
of
successful
overlay
systems
that
we
think
could
actually
benefit
from,
say.
E: ...a new distributed-systems programming approach. Ray is an interesting machine learning platform; all these dataflow systems are interesting in a sense, because their problem is a bit more constrained, but it fits quite well, of course, also with a networking-minded approach.
C: And, yeah, I think when you issue it we're going to make sure to post it to the list, so people will know where to get it.
C: We should go directly into the next presentation, by the other Dirk, who will present on something that's really related to what happened at the Dagstuhl seminar: a project that is a collaboration on compute-first networking, and I will let Dirk talk about it.
J: Yes, I will spare you the video; it's two o'clock in the morning here, so you don't want to see my face right now. As mentioned, the project has been set up between Huawei, BT and Cambridge University. The CFN project team includes Jon Crowcroft and Richard Mortier, Phil and Peter Willis on the BT side, and a bunch of PhD students at Cambridge University.
J: We investigate use cases; that's what we started with. The project started with some delay: Jeffrey, my co-chair, who was involved in setting up the project, handed it over to me last year, and that led to some delays. Together with Kobe, we started effectively at the end of last year or the beginning of this year, roughly. We are currently in the use case stage, which has led to the requirements for node and network technologies that we have gathered so far.
J: Next steps will be to develop key networking and node technologies, and also to provide a demonstration of key benefits for a selected use case. So this presentation today is about the use case part and, fitting the general agenda of today's meeting, we applied a taxonomy to the use cases that we looked at, starting obviously with a brief description of each use case.
J: What are the services that are used and provided in that use case? What are the drivers, that is, what drives the need for the solutions in the particular use case? We also had aspects which I will not show today: the expected economic value, with references to market reports, and the time to demand, i.e. when the solution that will rely upon some of the novel features we identified is required, in the short, mid or long term; we placed them mainly in the mid and long term.
J: As you can imagine, the stakeholders, in terms of driving adoption, are end-user organizations, service providers and operators, but sometimes, and you probably noticed that in the first use case, regulatory bodies could also be a key stakeholder in these use cases.
J: So the first one we looked at is distributed data storage, as a meta use case if you will, because it's utilized in some of the other use cases as well. There's a need for vendor- and service-provider-independent data ecosystems: marketplaces that are open to all customers at reasonable cost, with low entry barriers for storing data.
J: The services here are distributed consensus systems, or DLTs to some extent, that provide discovery functionality to match constraints such as hardware capabilities, the security mechanisms and hash algorithms being used, or the proof-of-work pattern being utilized, allowing me to find and discover the right miners that match those constraints and to perform the transactions. That includes the multi-point sending of constraint-based requests to a selected group of miners and, ultimately, the actual computation over a pattern: a proof-of-work scheme, or some of the other mechanisms that are being utilized.
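The discovery-then-multi-point-send step described above can be sketched as a simple constraint match over announced miner capabilities. All field names and values here are hypothetical, purely to illustrate the matching logic, not any real DLT's announcement format.

```python
# Announced capability records for available miners (illustrative data).
miners = [
    {"id": "m1", "hash_algos": {"sha256"}, "cpu_cores": 32, "consensus": "pow"},
    {"id": "m2", "hash_algos": {"blake2b"}, "cpu_cores": 8, "consensus": "pos"},
    {"id": "m3", "hash_algos": {"sha256"}, "cpu_cores": 4, "consensus": "pow"},
]


def discover(miners, algo, min_cores, consensus):
    """Return ids of miners whose announced capabilities match the constraints."""
    return [
        m["id"]
        for m in miners
        if algo in m["hash_algos"]
        and m["cpu_cores"] >= min_cores
        and m["consensus"] == consensus
    ]


# The constraint-based request would then be sent to every selected miner
# (the "multi-point sending" step in the talk).
selected = discover(miners, algo="sha256", min_cores=8, consensus="pow")
assert selected == ["m1"]
```

In a real system the capability records would come from the DLT's own discovery mechanism; the sketch only shows why constraint matching is the natural first stage before the transaction is performed.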
J: Here in Europe, the EC, as the executive body, is quite heavily pushing for data sovereignty of EU companies, EU vendors and EU nations, combined with the growing opposition to centralized and foreign-owned data platforms.
J: Covid brought some of these issues to light last year, in places like Germany for instance. There is growing value in data gathering and sharing; data is an important asset, and hence data storage, maybe in a distributed manner, becomes crucial. As I mentioned, there are also regulations such as GDPR, which pushes things like locality quite heavily, and purpose-oriented storage is one of the key issues for data as well.
J: So one of the other use cases we looked into is transportation and V2X, which overlaps with what Phil presented before for Piccolo.
J: The services here are general AV communication, computational cores, and sensor gathering and fusion. The ones we specifically listed, which again partially overlap with what Phil presented for the Piccolo version before, are positioning through distributed AI, localized object recognition for level 5 driving, localized dynamic high-precision maps, predictive accident avoidance, predictive vehicle flow management, and the virtual black box. Those are some of the example services that use a mixture of AV communication and computation.
J: Of course, as I mentioned before, the key in all of these services is to provide a reaction capacity for the virtualized services, with fast rerouting, given the high dynamics that may occur between some of the distributed service endpoints, and the needed capacity for filtering and pre-processing.
J: There are also data privacy and access control aspects in these services, which link to the distributed data storage use case I mentioned: where to actually access the data, and how to localize data so that it is always connected to the "best" service, with "best" here in the air quotes that are not shown on the slide.
J
Computer,
where
traffic
steering
is,
is
a
key
aspect
and
to
ensure
that
the
service,
that's
the
traffic,
is
being
routed
to
is,
for
instance,
not
overloaded
or,
as
short
as
delay
possible,
depending
on
what
service
we
are
talking
about.
These
are
some
of
the
key
aspects
in
the
services
that
we
observed.
J: The drivers, generally, are the level 5 driving requirements in the various worldwide regions; there are some differences, but overall they're rather similar, and you find these services across all of those requirements. The last area we defined is digital twins. To quote the Digital Twin Consortium: a digital twin is a virtual representation of real-world entities and processes, synchronized at a specified frequency and fidelity.
J: So it consists of data, computational and representational models, and service interfaces. The services are similar to the V2X ones we saw before: again, general AV communication, computational cores, and sensor data gathering and fusion, but maybe with somewhat different goals in the digital twin.
J: So you have a lot of data retrieval and storage aspects, and some of the solutions that I looked at in the various consortia are, as mentioned before, very heavily based on DLT solutions, with distributed AI computations for the models using, for instance, gRPC- or RDMA-based invocation models.
J: That often means needing to optimize the communication as well as the computation pipeline. An example is a live AV feed with feature extraction and a centralized unit, where the computation pipeline optimization may be more important than the communication optimization.
J: In that case, the drivers are again several vertical industries, such as manufacturing, automotive, supply chain, aerospace and defense; that's a very long list. There's an emergence of a number of industry initiatives: the Digital Twin Consortium, which I already quoted before, the IDTA, and Gaia-X, among other initiatives that work on various aspects, Gaia-X very much on aspects like data center interconnect and obviously on data storage as well.
J: The announcement of computation within and across domains, the need for delegated announcements, and maybe even pre-announcements are being derived from some of the use cases. A lot of these communications we see happening within what's captured in RFC 8799: limited domains that are very stakeholder-specific (some of the stakeholders I mentioned before in the vertical industries involved in the initiatives), but there is obviously a requirement to interconnect across limited domains, where in particular the public internet comes into play.
J: There are also requirements that allow us to bind to available computational instances based on dynamic constraints; earlier talks touched on this, around being able to build dynamic relationships between the services as they are being deployed in the network. The constraints here obviously vary per use case, but may also change over time, and hence these bindings may exhibit dynamic binding behavior as well. Collective communication patterns, potentially even request-specific ones, can be found in a number of the use cases.
J: And there is the distributed reasoning that I mentioned before. IPv6 support, for making this work in an IPv6 environment through suitable potential extensions, is another requirement we derived. Some of these requirements, I wanted to note here to make the link, are already captured in the existing use case draft, as I mentioned before.
J: That brings me, finally, to the next steps. For COIN, the next step from our side is to consider the integration of the use cases and requirements into the existing use case draft. That's quite easy, given that I'm one of the co-authors; I wanted to bring this up for discussion with the other co-authors for the next iteration, to maybe bring some of the insights from the CFN work in there and to see what's missing.
J: Some of the use cases are already in there in part, so it should be quite easy. For the CFN project, as I mentioned in the objectives before, the next step is to pick a use case of choice. We haven't done that yet, i.e. decided which one will be worked on in terms of the system architecture and the demonstrator. The demonstrator is planned for next year; the system architecture work will happen this year. So that's something, maybe, for another update to the group once we have more to share. Good, thank you.
C: Yeah, thank you very much. I think it's a great idea that you want to include this in the use case document. By the way, we're going through a lot of use cases in this meeting, and I think the goal of the minutes will also be to organize them in such a way that we have a better view of all of this, and to move a lot of the discussion to the list.
A: Regarding the consortia, I wondered if you think that some of those organizations are ones we should hear from, given their interesting perspectives that are either adjacent to or overlap with some of the goals here. What are your thoughts there, and does the CFN project also interact with those consortia as well?
J: Well, the CFN project only indirectly, through the partners, not as a project itself; the project itself is a research project. We have some involvement in some of the initiatives, and Gaia-X is one where Huawei is very active. One that I didn't mention, and I don't know exactly why I left it out because it might be quite interesting, is the IIC, the Industrial Internet Consortium.
J: Their industrial ledger task group, if I use the right word, is currently preparing a white paper that may be quite interesting. That's something that we initiated in that particular group, where we presented some work on the impact of DLTs on networks, which has to do with the way DLTs operate. We specifically looked at Ethereum as one of the examples there: does it really work the way we expect it to work, and what are some of the issues that are caused by the network technologies involved?
J: Are there any ways to improve on that? We picked the IIC because it had that dedicated ledger task group, and the ledgers themselves are quite important in the IIC because of, again, the data storage aspect. These are maybe some of the discussions that could be quite interesting for this group as well, so I think that in the data space there may be something.
A: In fact, there was a question in the chat window from Chris: were you targeting Filecoin in one of your slides when you were speaking about the ledger technology?
J: Well, we started with Ethereum just because it's quite easy for us to study. I've also looked further; I actually had a call with Yiannis a couple of weeks or months ago, and Filecoin is another one that we find quite interesting, because of some of the changes in the evolution from Ethereum to Filecoin. So it's not specifically targeted at Filecoin; all these platforms are part of this distributed data storage picture, and there is also the question of how the network can help with any of that.
J: Some of the problems we outlined in the operation of Ethereum really have to do with network issues that we all know, like non-routability to miners even after the miners have announced themselves, and the associated problems that this has for the operation of the DLT. Now, Filecoin has specific mechanisms to tackle some of those at the overlay level, which is the reason why we've also looked at it as a potential candidate.
A: I'd be curious to get pointers to some of those consortia; if you want to post them to the group as a follow-up email, that would be wonderful. Thank you.
C: Okay, sorry for talking over you. Thank you very much, Dirk. The next slot was at first supposed to be the presentation of a draft, but when we started talking to Xavier we figured out that it is also a new use case, for mobile virtual networks. So Xavier, please go ahead.
L: This use case actually uses three well-known technologies: P4 for data plane programming, 5G as the underlying network, and 5G LAN as the virtualization technology. The goal is to initially study the problem without too many moving parts, because all those technologies are known; but this is a starting point, and we can look at expanding the programming aspect and at how P4 can be used for more things than it is today.
L: First, data plane programming can be used by tenants to control increasingly complex virtual mobile networks, so we need to cope with the evolution of future mobile networks towards more and more customized environments. We can expect very diverse and granular network services, including computing services, besides handling complexity.
L: Let's look at the high-level description of the use case. The 5G LAN connects group member devices; here we have the mobile devices UE1 to UE4.
L: Now we can look at some aspects of the underlying network, with two points mostly. First, the P4 program needs to be deployed on physical nodes, and possible locations mostly include the user plane functions: you have the anchor user plane functions represented here, which are at the edge of the 5G domain, you can have intermediate UPFs as well, and you can even deploy some P4 programs, or fragments of programs, on mobile nodes.
L: The green path between UE1 and UE3 passes through a tunnel between two user plane functions. And finally, there is another mode of operation where the data path leaves the 5G domain, goes through an external data network, and re-enters the 5G domain later on.
L: So that leads us to the requirements and research challenges. First, P4 programs will need to be split, or distributed, because data paths are distributed with no central node in the general case. This can be an input to the COINRG draft on P4 distribution, which is referenced here. In particular, in this use case, distribution is not done for scaling or performance purposes.
L: It is needed in order to process all packets. A second point, or second requirement, is multi-tenancy support: multiple 5G LANs can share the same infrastructure. The MTPSA paper mentioned earlier, and other related studies, provide useful solutions in this space, and the fact that programs may be distributed and multi-tenant at the same time could also add some additional challenges.
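The two requirements above (splitting a program across distributed nodes, and keeping co-located tenants isolated) can be illustrated with a toy rule model. This is a hedged sketch in Python rather than P4; the rule fields, node names and tenant ids are all hypothetical.

```python
# Each forwarding rule carries a tenant (5G LAN) id and the physical node
# (e.g. a UPF) that sits on the data path it applies to.
rules = [
    {"tenant": "lan1", "match": "ue1->ue3", "node": "upf_a", "action": "fwd"},
    {"tenant": "lan1", "match": "ue3->ue1", "node": "upf_b", "action": "fwd"},
    {"tenant": "lan2", "match": "ue2->ue4", "node": "upf_a", "action": "fwd"},
]


def split_program(rules, node, tenant):
    """The program fragment installed on one physical node for one tenant.

    Splitting by node reflects that there is no central node seeing all
    packets; filtering by tenant reflects multi-tenant isolation.
    """
    return [r for r in rules if r["node"] == node and r["tenant"] == tenant]


# upf_a hosts fragments of two tenants' programs, but each tenant only
# ever sees its own rules.
assert split_program(rules, "upf_a", "lan1") == [rules[0]]
assert split_program(rules, "upf_a", "lan2") == [rules[2]]
```

The interesting research challenge named in the talk is exactly what this sketch glosses over: doing both the per-node split and the per-tenant isolation at the same time, inside real P4 pipelines rather than a Python filter.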
L: A third point is mobile network awareness: a P4 program could interact with the mobile network system. For example, it could learn parameters associated with a flow, and it could also influence the processing of a flow by the 5G system; based on some processing, it could select a particular slice, for example. And a fourth point, or fourth requirement, is mobility support.
L: Since we are in a mobile system, P4 programs would in this case need to migrate in order to follow data flows when mobile devices move from one attachment point to the next. And finally, there are security risks, including overusing network resources, injecting traffic, and accessing traffic from other users. So just to summarize: data plane programming in this use case is a way for tenants to control virtual networks over various underlying networks, and we can use the technologies above as a starting point.
L: And a second question is whether there are people interested in developing some of these aspects,
L: for example related to new data plane programming approaches, or maybe improving P4, as was mentioned in the Dagstuhl seminar summary. After that, we could also look at other underlying networks, data centers especially, and of course we could study the impact of those changes on the requirements and challenges.
L: So please let me know if that's the case and, in any case, thank you for your attention.
C: Thank you very much, Xavier. I can summarize a little bit what was discussed with the use case authors: maybe this could be one of the use cases, and then we would need to talk about whether it still needs its own draft. There's also a question from Dave Oran. Do you see the question in the chat, Xavier?
C: Yep. So the question, I will read it: he says he doesn't want an answer now, but maybe you can give some ideas toward an answer, about a P4 virtual 5G network. He says that we've had virtual router technology on conventional routers for 20 years, but nobody could figure out how to deal with the required isolation properties, or the ability to properly soft- or hard-partition the resources, and the question is: is P4 making this easier or harder?
C: So maybe you can just give a few pointers, and then we can move the rest to the list or to personal communications, because I would like to move on.
L: Thank you for the question; I think that's a good question. When you speak about bringing a P4 program, or data plane programming, inside the mobile network, it's not easy to get positive feedback, because it's a very controlled environment and that's seen as dangerous.
L: But I think that you would need some form of interface; I mean, the integration between the mobile network and the P4 switch would need to be controlled, under the control of the operator, in order to avoid the security issues. I didn't see any big blocking issue, but I may not have thought of everything. So thank you.
A: And our next talk is Chris, who'll be speaking to us, in fact giving us a preview of his paper that's been accepted to INFOCOM 2021. Chris, you're welcome to start sharing your screen while I give a little bit of background about you. Chris is a PhD student who's finishing up this year, so for all of you out there who are looking for talented technologists, Chris will be on the market shortly.
A: His defense is likely sometime in the December time frame, and he's looking for a position in research and R&D, mainly in industry, but potentially also academic positions that are closely linked with industry. We'll hear about his work around SEND, but in the future he's also looking to focus on storage within edge-based network systems, specifically towards service and/or performance efficiencies and optimization. And I just want to underscore that this is exactly the kind of talk that we really enjoy hosting, so that goes for all of you other academic advisors or technologists who know about really interesting research groups.
M: So yeah, I developed SEND basically as a kind of adaptation of storage towards better transmission and servicing of data towards the cloud and the edge. Within the presentation I'll go through the introduction of edge data repositories, which existed before this paper as a means of storage at the edge, to make an introduction to storage rather than caching. Then we'll go into how SEND was designed: I'll introduce the labels, which are the main metric for the whole system; we'll go through some data categories and strategies that were developed from the labels and that enable the system to use these labels to get the best performance it could, at least in my opinion at that point; and then some developments on how these packets are generated.
M: Then there is the evaluation itself, which we'll go through just briefly so you understand its impact, and then we'll go into some future work, and we can discuss afterwards. The edge data repositories concept was partially inspired by earlier work on reverse CDNs, basically as a blue-sky concept.
M: At that point I started working from this paper and from work on mobile data placement at the edge, and then I developed the store-process-send system, which effectively incorporates the idea that whenever the system gets some requests from the edge, it will always store the data that it needs for whatever future requests come.
M: It may be able to satisfy some of the requests with the data that is already in the system, thus being able to provide very good servicing in some cases, and in some it may not; but as it has shown up to this point, it can actually improve computing systems by quite a lot. As everyone here knows, any computing system is generally made of at least a processor, main memory and I/O.
M: So this important part of computing, in my opinion at least, was left out until recently, which is storage, and this needed to be addressed at that point.
M: Of course, making the differentiation between cache and storage... sorry, I accidentally skipped a few slides there. Okay, so I developed store-processing for that reason: it was developed for the reverse flow of data, as I said, and for service adaptability. For this to be implemented, I used two definitions of timing for data to be stored: the freshness period and the shelf life.
M: The first period represents the deadline for the execution of a time-sensitive processing function, and the shelf life refers to the maximum storage period, that is, the maximum window for the data to be processed by a function at the edge.
M: So I developed storage of network data at this point. Basically, we had to have an intermediate storage environment which drives the data from the bottom towards the top, while having services at the center of the model, so it could interact very easily with data providers, which are also application providers, basically managing at least some of the data, if not most of the data, generated at the edge, while also being able to provide a secure and useful storage environment for any of the other entities within the system.
M: This schematic basically represents an internal algorithm for how the storage would work. Before this, the system was pretty dumb; then we implemented some more intelligence, in that it started having some feedback loops. What was already implemented was basically function instantiation and a feedback loop towards the cloud, implemented without storage beforehand.
M: So the labels themselves offer the ability to track performance, the properties of the data, and the data placement statistics. This is on top of the freshness period and shelf life, and it provides a means for other strategies to be implemented on top of it, so that the system can actually use the labels to its advantage and to the users' advantage. Of course, it makes the system aware of the data context, and it can improve performance exactly because of that.
M: There is more potential in that; we are well aware of it and we're working towards it at the moment. The labels also enable accurate data placement and storage decisions, because they can be used, for the above reasons, to both replace and duplicate the data depending on where it's needed. So the labels themselves are the most essential part of the system.
M: Thus, from here, we basically decided that we have to have a structure within our data population. All the data are individually hashed, so they have distinct identifiers, but at the same time they all come with labels, which can classify the data and make it easier for the different entities within the medium to both identify and use the data more effectively.
M: So then we decided that these types of data should have different scopes: storage-related, function-related, and then offload-bound, basically cloud-bound, data.
M: We then had a prototype evaluation that used the Google File System for timings, with local timing assessment specifically for getting good and accurate measurements with regard to task completion, followed by a scaled-up network simulation study. We got really good results.
M: Basically, the data insertion time was between 0.06 and 0.9 milliseconds, which is arguably good for some, maybe not all, applications, and the lookup times were comparable to that. As use cases we also used some V2X applications, and for those we got 92% of user requests satisfied. The datasets were realistic:
M: there were three datasets, one synthetic and two cloud-based, i.e. data-center-based, datasets, and, as I was saying earlier, we used as the baseline for comparison a non-storage application based on a function feedback loop. That was... oh sorry, I accidentally dropped out.
C: I'm in the mic line, but I'll get out of the line since I'm hosting this. Eve has something.
C: Go ahead; I will ask my question after. Dave, your question is part of the ongoing discussion, right? It's not related to this presentation. Okay, Eve, go ahead.
A: I had a couple of questions; I was just scrolling back in the chat window.
A: My first question was: who or what creates the data labels, so that the data can be processed better as part of functions or services in the network?
M: Okay, so basically it's the applications; that is the main creator. Basically, a data provider publishes something, say an IoT device right at the edge, on the end-user side.
A: And so I was also curious, and I suspect your answer is likely around "the application tells you", but how do applications, or functions for that matter, tell in what direction your data is bound? To where, and what is the actual eventual migration?
M: So that's different; I guess I didn't get it through. The main idea here is that your application is generating data, and it may need certain data to be queried at certain points in time. It's not about where the data is found: basically, it can be stored within any kind of environment. That's the whole idea; you want it stored within the network, whether that's in a data center or locally.
A: But then you were also saying: okay, if data gets generated as the output of some kind of computation and it doesn't fit in your repository, is there, I presume with this federation of repositories, something that advises or recommends where you might migrate your data to that might have space, or how to take action in all of this?
M: Yeah, so that's part of the work. It's not that I didn't implement it before; beforehand I just sent the data to other repositories whenever not enough storage was available. But at the start, even with quite high shelf lives, the data would still live for quite a long time and still leave quite a bit of storage empty.
M: So it was good, and I actually had pretty low storage associated with each repository, and that was satisfying to see.
A: All right, well, if you can get through the presentation of a talk at this late hour in your time zone, anything is easier, I imagine. So good luck with the INFOCOM presentation; it'll be in a more reasonable time zone for you, hopefully.
C: No, I will skip it; it was about assumptions, but it's probably in the paper, and I actually did read the paper once. Actually, maybe I can ask a very quick question, for once that we're ahead of schedule. What are your assumptions? That was my question: without going directly into the details of the implementation, what are the assumptions that you have on the underlying capabilities of the network and of the storage?
M: Right, so I actually used simulations, and I used Icarus, I don't know if you're aware of it, and I had different request
M: generation rates per second, depending mainly on the dataset; they went from somewhere between 100 and a thousand up to almost 10,000 requests per second, with 16 data repositories distributed differently, and the requests were distributed depending on...
C: I think it's not that type of assumption; I think it's more about, you know, to what type of operational system this would be applicable. But we can take it offline, and for once that we're ahead of time, I want to keep ahead of time. Yes, if you could answer that question, what type of operational system would use this, which is something I'm interested in anyway.
C: Yeah, let's move it offline; let's move this to the email list. If you could answer on the email list, by the way, that would be great. So yeah, we're doing well in terms of time, and I'm so happy about it; we usually run so much late. Okay, Ike is going to talk to us next.
C
We
had
decided
that
we
would
not
really
talk
about
drafts,
but
since
this
was
a
use
case
directed
meeting,
we
decided
to
have
like
maybe
short
presentation
and
the
only
people
who
asked
for
a
presentation
was
the
use
case
draft
which
makes
sense
since
it's
related
to
the
theme
so
ike,
please,
you
have
a
five
minute,
so
I
said
these
were
going
to
be
lightning
talks.
I
Yeah, and I'll also try to be very brief. This is actually the first update that we give now that the draft is no longer the industrial use cases draft but the general use cases for in-network computing.
I
So basically, as suggested at the last update that we gave for the industrial use cases, what we did was to include all the other aspects from the other draft that Dirk was also working on. We also have two additional contributions, one by Mauricio Zay and one by UCL, so the draft is still growing and we have a lot of new use cases. Right now, though, we are just at the point where we only have the descriptions of the different use cases, only roughly following the scheme that you can see here in the middle, so also opportunities and research questions, for example. What we would really like to do eventually is to start analyzing the use cases, so that we can then actually start to derive
I
basically a research agenda, maybe, or at least to group the different use cases that we have in a more sensible way than we have done until now. For that we are obviously always looking for new use cases, and as we've seen today, there are quite a few of them out there. Together with Xavier,
I
we want to include what we've heard today in the draft, and then also potentially start with a rough analysis, or at least try to get the draft into a more consistent shape. Right now one can still see who has contributed what, and the structure is not consistent in all places; that's something we would like to work on next. And that's already it.
C
Thank you so much. I'm one of the authors, so I will refrain from further comments. Does anybody have questions? As the chairs, we will probably have a look at this; it is a research group document.
C
Maybe we can talk the authors, including myself, into putting it in shape again and start moving it up the stack, because there are two very important drafts right now: the one that essentially defines the framework, which is yours and Dirk's, and this one, which actually shows how the framework can be implemented. And then, per our other milestones, there are other things that can flow from there.
C
So the last slides are basically some announcements and some, I would say, logistics and housekeeping that a few of you have probably seen in the past few months.
C
I had posted the request for a workshop at CoNEXT, and it was accepted, thanks to the hard work of a number of people who are on this call and a few who are already asleep: Fernando Ramos from Portugal; Edgar Ramos from Finland, where it's close to three in the morning or later (I don't know if he's still online; they are seven or eight hours ahead); Dirk Trossen and Ruta, who are in Munich; and me. And we submitted it.
C
I'm sorry to say that our beautiful website is still not online, but we do already have a CFP site. The paper submission deadline is September 20th, and the workshop will be on December 7th. Again, I would say this is an effort of the research group.
C
Sorry, and I was very proud of that, because this is one idea that we had for the research group: to also spin off a few things. As you saw today, Noa Zilberman also posted another workshop (no, it's a hackathon), and that one is much closer in time; it's actually at SIGCOMM. It was posted today, but after we uploaded the slides. There are going to be more meetings, obviously; we're not done with this.
C
It's been two years, but I think after two years we've basically only scratched the surface of everything that can be done. As you know, there are other groups that are also looking at this.
C
Oh, and Charlie Perkins, just as I'm talking, posted that he has a meeting tomorrow, and in that meeting there is a presentation on computing in the network that we hope we can maybe include in our next meeting, maybe in October; we will be in Madrid, virtually or in hybrid form, for sure. And I think December is an awfully busy season for everybody.
C
But another idea that we had always kind of thought about was to co-locate our interims with related conferences; the ICN people have done it extremely successfully. We'll see. Obviously, CoNEXT is going to be virtual, so it's not that people are already there and can just come to the meeting, but it's the theme of the week. So if the theme of the week includes some things related to COIN, maybe we can
C
profit from that. And if we don't co-locate it, then we can maybe hold it not too long after and profit from all the presentations that will be given at the workshop. For the workshop, we also intend to do something either in CCR or maybe a special issue with some magazine, so that we can continue to take advantage of all the presentations. So for once we finish five minutes early, which is very rare for us. Thank you very much; I posted it already.
C
Thank you very much to everybody who stayed up into the wee hours to support this. California people, for once you had it really good; for us it's time for dinner, so it's probably not too bad. Another thing that we, GEM (or is it JEM? I'm losing my English now; I live in Montreal), would like to encourage is more discussion on the list. We haven't used it very much, and that's why I said today
C
that some of the questions should go to the mailing list, and already some people have answered, which is great. Eve or Jeff, do you have something to add?
C
And thank you again to everyone participating and making this group, I think, a really exciting one. I can honestly say that I'm humbled to be part of it. Thank you so much.
C
Eve, do we close this thing, or do we just leave?
C
Yeah, I know, but it will close the session.