From YouTube: IETF105-COINRG-20190725-1330
Description
COINRG meeting session at IETF105
2019/07/25 1330
https://datatracker.ietf.org/meeting/105/proceedings/
A: Hello, hello, everyone. This is Computing in the Network; we're going to get going in just a few seconds. If you are not here for COIN, please stay anyway, you have a great program ahead of you, and if you're here for COIN, well, you're in the right place. We still need someone to take notes. I sent out a request this morning and nobody answered, so if somebody who is not presenting could be the Good Samaritan and take notes, it would be greatly appreciated.
C: Hi, I'm Eve Schooler, and these are my co-chairs, Marie-José and Jeffrey. Welcome to the COIN research group. This is our second official meeting, our third gathering; of course, we started with a BoF. We're still in proposed research group mode, and we're grateful to see so many of you out there and so many contributions. You're in the IETF/IRTF, so you know the Note Well slide. A few things you should know: our datatracker, where all of our documents are, and our charter.
C: Our milestones: we're in the middle of migrating, basically, from a wiki-based mode of sharing information to a GitHub document archive as well as code archive. The COIN mailing list is simple to remember: coin@irtf.org. We have remote participants today, we're going to have a remote presentation, and we've just set up our own Slack workspace, which will also allow us to have interim meetings more easily. We have a full agenda.
C: We have basically many, many internet drafts. Many of them will be presented today, at least the new ones. I think we've got eight at this point in time, five of which will be presented today. In addition to the administrivia we'll talk about the next meetings that are going to be happening, and I think the only change to these presentations is that Dirk and his co-author have a combined, merged presentation based on their drafts.
D: This is Jeffrey, for those new to this group. The general goal of this group is to foster research on computing in the network to improve performance, and our focus will be network architecture and protocols, addressing real-time use cases and applications; this is work in progress. We have now modified the charter a little bit after Prague and the June interim meeting, and also based on the mailing list discussion; thanks to the contributors and the participants. So, the main change:
D: We are also working on some milestones, still very preliminary. We know these are tentative for a proposed RG, so they are not committed unless we are approved as a formal one, but we hope to use them to anchor some discussion of the future plan. We suggested to capture the state of the art and articulate some challenges, then target the use cases and also identify the ecosystem, dependencies and requirements. Then, hopefully, we can have a specific COIN scope, maybe later next year, specific enough that within it new architectures, mechanisms and protocols can be proposed. We have also linked the existing individual drafts to these milestones, just as related drafts, so basically as input to the discussion of each part.
A: We have two Dirks here, so we have Dirk number one on what COIN is and Dirk number two on the App Centres, and this is going to continue; obviously collaborators are welcome, and if you have other ideas for drafts, please supply them. We had our first hackathon on Saturday, and this is the group of people who participated, and I think our hackathon...
A: ...could have been some kind of a train wreck, because P4 is this new language that allows you to program switches and there are not a lot of people who know how to do that. We were very, very lucky that there is a local company called NoviFlow, whose business is doing P4, and they lent us two engineers for two days. So thank you so much; they actually got us going, and they were around the table with the team, so it created a really, really nice team.
A: Obviously there had been hackathons and tutorials before at other conferences, so we could take basic examples to get everyone on board. We had a remote participant who actually was the guy who knew how to program this, so he had his own project on machine learning in IPv6, and he made a lot of very good progress on this.
A: We had, I don't know if he's here, but we had somebody from Liquid Telecom, which is from Africa, who recognized that he could actually do some translation of P4 to Golang, which is the language that they use on their networks. At the end, after one day where everybody was up to date on how to do things, we had two participants who addressed a real problem that we're actually going to face both in industrial networks and in this XR/AR/VR field, which is packet filtering.
A: So, comparing a packet to, I called it, the original idea was to have a perfect packet and then compare all the other packets to it and make sure that we can detect when there's something interesting. Obviously we didn't do that; we started by comparing addresses, so we're storing one and comparing. But just to tell you that after one day, people were really, really ready to do real work. We also made a list of ideas, and I'm going to post that on the GitHub; we have the pictures.
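As a rough illustration of what the hackathon team built (not their actual P4 code, which is meant for the COIN GitHub), here is a minimal Python sketch of the same idea: store one reference address and flag packets whose source differs from it. The field names and the example trace are hypothetical.

```python
# Minimal sketch of the hackathon filter idea: keep one "reference" source
# address and flag every packet that does not match it. This is illustrative
# Python, not the P4 program written at the hackathon.

def filter_packets(packets, reference_src=None):
    """Yield (packet, is_interesting) pairs.

    packets: iterable of dicts with a 'src' field (hypothetical format).
    reference_src: the stored "perfect" address; if None, the first packet
    seen becomes the reference.
    """
    for pkt in packets:
        if reference_src is None:
            reference_src = pkt["src"]          # store the first address seen
        yield pkt, pkt["src"] != reference_src  # "interesting" = mismatch

if __name__ == "__main__":
    trace = [{"src": "10.0.0.1"}, {"src": "10.0.0.1"}, {"src": "10.0.0.9"}]
    for pkt, interesting in filter_packets(trace):
        print(pkt["src"], "interesting" if interesting else "ok")
```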
A: What we learned is that the need to come prepared is really important, especially when you're going to do everything in virtual machines, and so is the usefulness of experts in a field that's expanding. I don't know how many hackathons are held in a field where the language is not even fully defined right now, so it was really good to have people who knew what they were doing. You can also gain participants the same day, which was great: people were seeing what we were doing and they were joining the team.
A: You can do a lot in two days, which was surprising in a way, and the teamwork was great. We now have a shorter mailing list of all the people who participated in the hackathon, to continue the work that we started, and we plan to have another one in Singapore, so if you're interested you can join us. Now we have a ton of presentations, and we have instructions for the presenters, and I see Dirk saying: oh, you didn't tell me this. So this is a surprise.
F: Thank you, yeah. The intention of the draft that is mentioned here, and of this presentation, is actually to provide input to your planning process and discussion. To be honest, I was a bit surprised to see such a, I mean, detailed list of milestones; it's good, but, I mean, in my experience a research group also takes some time to find directions, and I think, well, it's...
F: Often you hear about edge computing, and this is a really fuzzy term; what it typically actually means is extending well understood and widely successful cloud computing concepts to the edge, so I would say there's potentially not that much research to be done there. Architectures like mobile edge computing sound, kind of, you know, intriguing, or at least unusual, but this is essentially just extending cloud computing virtualization and management technologies to execution platforms at the edge, to edge systems like radio base stations, for example.
F: So when you think about doing similar things in, say, less well controlled, more distributed, say internet scenarios, there could be an interesting edge there. These over-the-top systems, I mean, they benefit from all the internet tools that we have, transport protocols, TLS and so on, but they essentially treat the network as an overlay and as such have relatively limited visibility into the network.
F: What can the network do to support these systems more directly? In previous talks we talked about this idea of doing that by jointly optimizing computing and networking resources. So, for example, if you want to make a compute offloading decision, you do not just allocate any available VM; maybe you do this based on knowledge about the topology, congestion, historic performance and so on, and so you have a system that can make many decisions in the network itself and can maybe also leverage...
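To make the joint-optimization idea concrete, here is a small hedged sketch (not from the draft) of an offloading decision that scores candidate execution nodes by combining compute load with network path metrics, instead of picking any available VM. The node names, weights and metrics are assumptions for illustration.

```python
# Hedged sketch: choose an offloading target using both compute and network
# knowledge, rather than "any available VM". Metrics and weights are invented
# for illustration only.

def pick_node(candidates, w_load=0.5, w_rtt=0.3, w_loss=0.2):
    """candidates: list of dicts with 'name', 'cpu_load' (0..1),
    'rtt_ms', 'loss' (0..1). Returns the name of the best-scoring node."""
    def score(n):
        # Lower is better: busy CPUs, long paths and lossy links all penalize.
        return (w_load * n["cpu_load"]
                + w_rtt * (n["rtt_ms"] / 100.0)
                + w_loss * n["loss"])
    return min(candidates, key=score)["name"]

if __name__ == "__main__":
    nodes = [
        {"name": "edge-1",  "cpu_load": 0.8, "rtt_ms": 5,  "loss": 0.00},
        {"name": "metro-1", "cpu_load": 0.3, "rtt_ms": 20, "loss": 0.01},
        {"name": "cloud-1", "cpu_load": 0.1, "rtt_ms": 80, "loss": 0.00},
    ]
    print(pick_node(nodes))  # trades off load against path quality
```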
F: ...mechanisms that the network actually already has, like routing protocols, for example. Let me go through this a bit quicker because I've said it before. Now, when you try to implement this joint optimization, or this integration of computing and networking, there are different candidate technologies you could look at. Service function chaining is one; we had a talk on this one, say, two meetings ago. It is basically the idea that you arrange, say, paths so that traffic passes through, touches, certain functions.
F: Normally these systems are fairly statically configured, so they don't change that often; you can reconfigure them, but the idea is not to, you know, switch next hops back and forth all the time. Typically the assumption is also that we actually don't get in between the TCP end-to-end control loop; it's more like on a packet level, what you do there.
F: Okay, so this is intended for specific operator, so-called Gi-LAN scenarios, not exactly a platform for distributed computing, but I'm interested to see what our colleagues have done in the next talk. Then, as you just heard about the P4 work at the hackathon, there is this work on using programmable data planes to achieve in-network computing; Marco also presented this at earlier meetings. This is the idea that you can implement some application logic, for example, in slightly more powerful switches.
F: I would say it's an interesting approach with many interesting abilities, but the systems that we have seen so far, I think it's fair to say, are rather point solutions that assume, well, you can basically intercept the packets, work with the match-action logic and program the application semantics in languages like P4. This is a fairly limited environment and it also has kind of strong assumptions on security, which I guess would be difficult to meet.
F: So in this group here we think we have, like, two directions that we have been discussing so far: coming down from these distributed computing systems in the application layer and trying to see how we can maybe integrate those things with the network, and then, say, the P4-ish approaches, for example, maybe trying to move up the stack and see what we can do for applications. That just gives us a handle to talk about a few categories.
F: So here's one example that kind of comes from current work that we have been doing on enabling computing in a group of, say, ICN folks. Assume you have a network of nodes that can generally offer compute services; these could run execution platforms or any kind of platform that could run some functions.
F: We assumed that we would be agnostic to the specific environment: we would just be able to call some functions or to create some state, and of course you want to be able to leverage specific features, like a GPU here or a trusted execution environment there. In general we assume it could be a heterogeneous system, so you'd be able to use whatever you need for your application, and in a distributed application context there could be something like sessions, or like an application in progress.
F: So in that system it's useful to have information about where those functions are, for example if I have a stateful actor I want to continue talking to. It's also useful to know what the resource utilization situation is: how busy is this server, for example, or how well is it performing right now; maybe it's actually formally not loaded, but I just figure out...
F: Okay, so in this system we distribute this information by some distributed protocol. You could think about using network mechanisms like the routing system to at least partially help you with that, or using information that you get from, say, transport protocols that tell you, okay, this path perhaps has a longer latency, right, you have some congestion information in those protocols. Okay, so that's just an example to set the scene so we can talk about some mechanisms.
F: So where does the data come from? Obviously there would be something like input parameters, say an image to process; there would be something like an operational context; and for a stateful service there would also be something like, yeah, background data that could be kept in a key-value store or some database, which could be modeled as a stateful actor, for example, and you could think about how you specify or use those parameters.
F: But also, what's a sensible data unit for these functions to work on? Packet processing, as in service function chaining, perhaps could be useful, but may not be the ultimate goal in the end, because there are solutions for that, and if you think about doing something for distributed applications, we think it's more useful to talk about application data units and to think about, yeah, some transport abstraction that allows us to convey these ADUs from one function to the other, for example.
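To illustrate what a transport abstraction for application data units could look like, here is a minimal hedged sketch of length-prefixed ADU framing over a byte stream; the draft does not prescribe this encoding, and the 4-byte header is purely an assumption.

```python
# Hedged sketch of ADU framing: each application data unit is carried as a
# 4-byte big-endian length followed by its payload, so one function can hand
# complete ADUs (not packets) to the next. The encoding is an assumption,
# not something specified in the draft.
import struct

def encode_adu(payload: bytes) -> bytes:
    return struct.pack("!I", len(payload)) + payload

def decode_adus(stream: bytes):
    """Yield complete ADUs from a concatenated byte stream."""
    offset = 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length

if __name__ == "__main__":
    wire = encode_adu(b"image-chunk-1") + encode_adu(b"image-chunk-2")
    print([adu.decode() for adu in decode_adus(wire)])
```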
F: There could be persistent side effects, or updates to database state, for example, or there could be more temporary effects; these are also categories that we suggest looking into a little more deeply. I'm not going to do this now. In our system design the question is also: what are the application semantics in terms of how you actually get data? Is it like a pull abstraction or a push abstraction? Pull would require something like a request/response for the caller.
F: It's easy to say that we want to have a general-purpose distributed computing platform, but I think for, say, meaningful applications you also really have to think about performance. Take distributed data analytics; okay, I'm almost done. For distributed data analytics you could imagine having some pipelines where you feed data into functions, and each function depends on the input data that another function generates.
F: So, I mean, this has to be well designed to provide any useful performance, and then there are things like how I can reuse data in the system, things like caching and so on. Okay, let me go over this a bit quickly now; we have more info on this in the draft. On the COIN scope: first of all, we are the Internet Research Task Force, so I think in general it's healthy to think about open networking environments.
F: So let's not make too many assumptions on how shielded our systems are, on how trustworthy the peers are, and so on. In general it's a hostile environment where you have to run functions on generally untrusted systems, and we have to find ways to establish trust, for example so that your code runs correctly, and so on: heterogeneous systems, security from the beginning, and so on.
F: One idea that we had: it may seem attractive to, you know, categorize the work by different use cases or scenarios, like industrial IoT, but we actually think that's not that useful, because in this example here, industrial or energy, there are, I guess, different generic classes of how in-network computing could be used. There could be, for example, something like a virtual PLC that runs somewhere; that's just virtual machines, not that interesting. But there could also be things like distributed data processing or some real-time control.
F
So
we
think
it's
more
interesting
to
dually
categorize,
these,
these
functional
properties
and
interaction,
types
and
these
kind
of
things
and
yeah.
So
our
suggestion
for
say,
if
you
would
did
some
like
you
know,
experimental
protocol
or
something
things
to
agree
on,
we
think
would
be
in
the
model
for
women
method
invocation
which
types
of
functions
are
we
talking
about?
What
is
the
programming
model
not
so
much
the
different
bindings?
How
do
you
ascribe
resources
and
how
you
allocate
resources?
F
A
H
I
J
J: But in any case, you will need to orchestrate your network to be sure that the set of functions is placed on your network, and to make sure that the right data flow goes through the right set of functions to perform more complex functions. In service function chaining this is achieved with some central control point, called an orchestrator, which has some limitations, since it builds a single point of failure into the network, has some scalability issues, and it fits badly in a world with legacy devices.
J: So instead of having some centralized control point which makes you manage every flow in your network and all your functions, we propose to instead have autonomous nodes, which host network functions and also have the distributed intelligence to steer traffic through a chain of functions and to choose the instances that are relevant. How do we fulfil this? In our network you have some gateways to which networks are attached; through a BGP extension they exchange some information, and based on that they build a network view.
J: With this view they build routing tables. What we propose is that we can bind specific prefixes to each tag, to each type of function, to make sure that the nodes know where functions are allocated. Moreover, if you use anycast addressing, you are able not only to map a prefix to a given function, but also to introduce BGP metrics for each function instance.
J: For instance, take this network with some classical routers running a classical routing protocol, and some routers which also host functions; these augmented routers, which we call anycast routers, will announce specific prefixes for each function. For instance, consider that your yellow function is an IDS and the pink one is a firewall.
J
You
will
have
extended
view,
which
is
on
white
and
based
on
that,
you
will
be
able
to
hood
first
first
flow,
which
is
for
white,
which
is
the
red
one
to
make
sure
that
he
goes
through
the
ideas
and
then
the
firewalls.
And
if
you,
you
base
your
matrix,
for
instance,
on
the
CPUs,
perceptive
use
of
EITS
freeways
and
if
a
second
for
wives,
it
will
be
able
to
through
the
second.
Second
second
instance,
making
is
some
not
balancing
between
the
different
function
instance.
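A hedged sketch of the routing logic described here: each function type is bound to an anycast prefix, and among the instances announcing that prefix the router picks the next hop with the best advertised metric (here, CPU load). The prefixes, metrics and data structure are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the idea: anycast prefixes identify function types (e.g.
# IDS, firewall), each instance announces a metric (e.g. CPU load), and a
# router steers the flow to the least-loaded instance of each function in
# the chain. Prefixes and metrics below are invented for illustration.

function_table = {
    "ids":      {"prefix": "198.51.100.0/24",
                 "instances": [("r3", 0.7), ("r5", 0.2)]},  # (next_hop, cpu)
    "firewall": {"prefix": "203.0.113.0/24",
                 "instances": [("r4", 0.5), ("r6", 0.9)]},
}

def next_hop_for(function_name):
    """Pick the instance with the lowest advertised CPU metric."""
    entry = function_table[function_name]
    hop, _cpu = min(entry["instances"], key=lambda inst: inst[1])
    return entry["prefix"], hop

def route_chain(chain):
    """Return the list of (prefix, next_hop) steps a flow traverses."""
    return [next_hop_for(fn) for fn in chain]

if __name__ == "__main__":
    print(route_chain(["ids", "firewall"]))  # least-loaded IDS, then firewall
```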
J: So we have built an architecture for our augmented nodes. You have a control part, which is distributed and runs on each node, and which receives higher-level policy, such as the mapping between a specific function and an anycast prefix; it constantly monitors both the network and the functions it hosts, and based on that it computes the metrics and announces them in BGP, to make the other routers aware both of the functions and of what is requested by the user.
J: To control the logical path a flow will take, the routing algorithm takes the augmented view built on the BGP announcements, runs the path computation algorithm, and builds a function service table which maps the locations of the other functions; it pushes this to our connector, which is the part that links the IP network and the virtual network functions. Based on the encapsulation we have used, which is NSH, which has been standardized at the IETF, ...
J
Yet
you
will
be
able
to
map
the
next
function
to
through
which
we've
got
a
bill.
Okay
occasions
your
packet
as
to
earth,
so
we
make
a
simple
implementation
of
these
photos
of
the
virtual
network.
Functions
are
simply
functions
in
in
network
namespace.
We
build
our
connector
which,
before
to
make
it
became,
we
didn't
choose
up
an
open
the
switch
because
it
didn't
have
a
stateful
memory.
So
we
use
people
to
to
implement
our
connector
to
make
sure
that
each
flow
will
be
processed
by
by
the
same.
J: We made some large-scale experiments with this implementation. We packaged our nodes as containers and we emulated an 87-node topology, and on top of it we deployed ten virtual network functions, and we configured our network to make sure that flows go through a first function, the yellow one, and then a second one.
J: There are five instances of each type of network function, the routing protocol uses the shortest path, and each router takes its own routing decision. What we show is that, based on the link-state update frequency, we are able to have stable load balancing across the different VNF instances running on our emulated network.
J: So what we indeed achieve is a fully distributed framework to chain in-network functions, which is not only interoperable with current routing systems but also brings resilience and scalability, since BGP is a field-proven protocol, and we achieve it while balancing the load among the different network function instances. And we don't have to add any configuration when we add a new virtual network function instance: when another node chooses to start a new instance, ...
J: ...it just has to announce the anycast prefix related to this VNF; after that, the routing system becomes aware of the new instance just as it would of a new host in a regular network. In future work we would like to see if we can extend our proposal to inter-domain service function provisioning, using BGP's capabilities, and to study the different metrics which could be used to take routing decisions for virtual service functions.
J: Moreover, some classical BGP management and failover techniques could be applied to our solution, and the next step of our work will be to check whether, based on the augmented topology we build, the routers would be able to take autonomous decisions to start or stop instances based on the load of the network. So, if you have any questions.
K: Just a comment, okay. I was surprised to see BGP used here, mixing reachability and policies. In general, there is reasonably stable work called BGP for NSH that's progressing in the BESS working group in the IETF, and BGP has built-in constructs, route target import/export, to impose policies, so it would have been much easier to implement, and inter-domain would work just as well. That's maybe something to consider. Okay, thanks.
J: What we have done for now is to take the routing decision with a standard shortest-path algorithm on the routers. Nonetheless, if you have the augmented view you can use, in my opinion, any path computation algorithm to take more complex decisions if needed, and you could also, instead of hop-by-hop routing, do source routing, using for instance segment routing, SRv6, to do that.
J: Yeah; in our first paper we made some experiments for a load balancing scenario where you have two virtual network function instances at first, and we chose to start a third one to change the balance between the virtual network function instances. So it is possible to modify the traffic engineering by doing that.
P: As you can see, there are quite a lot of different robots working, a lot of sensors measuring something, and everything is now more and more connected. What the mechanical engineers tell us is that they want to essentially move all the data that they collect there into edge or remote clouds, and they also want to control the different robots from the cloud, and, as a third aspect, all the data that they moved to the cloud, they want to use it and actually throw machine learning and data mining at it later.
P: We've then taken a look at how we can improve that with in-network computing, and you can see here the general abstraction from the previous figure, namely that we have sensors and actuators on the left-hand side, so one end of the communication; on the right side you see the edge clouds and the remote clouds, which will then be the other end of the communication; and in the middle we have the network. What we now propose is that we could place certain functions within the network.
P: So in the draft we then proposed quite a lot of research questions for the different use cases, and I've tried to condense them here into two categories. The first one is the design and development of the network functions that we want to run. Here the question is how we can account for the limited computational capabilities of the network devices; for example, in the control scenario you typically have quite complex control loops and control algorithms, and so...
P
These
simplified
versions
of
them
might
introduce
some
science,
some
sort
of
inaccuracy,
and
we
have
to
account
for
that
as
well,
and
then
is
the
second
aspect.
So
now
we
can
build
them
the
different
network
functions.
It
might
be
a
good
idea
to
find
out
how
we
can,
for
example,
provide
basic
building
blocks
for
the
functions
so
that
they
can
then
be
easier
combined
or
our
new
network
cars
can
be
built
easier.
P
The
second
big
point
is,
then,
the
operation
and
the
deployment
of
the
network
functions.
So
now
we
have
the
functions,
but
how
do
we
place
them?
Where
do
we
place
them
and
how
do
we
coordinate
them?
So,
for
example,
if
we
have
two
functions
in
the
network
and
we
replace
the
first
one,
how
does
that
affect
the
second
one,
for
example,
and
the
second
aspect
we
done
also
see
that
the
these
Network
pressures
will
certainly
affect
the
the
functions
that
are
then
from
computed
at
the
applications.
P
In
the
end,
and
here
we
have
to
yeah
consider
how
this
will
then
how
we
can
define
the
interaction
between
the
in
the
network,
parts
and
the
applications.
So
what
is
then
next
for
us?
We
plan
on
updating
this
in
our
use
case
draft,
and
then
we
especially
want
to
do
another
draft
or
we
plan
on
doing
another
draft
on
transport
protocol
issues.
So
this
was
done.
P: ...basically what I stated last on the slide before: the interaction between the end hosts and the network function in the middle. Because if we simply, for example, combine two sensor values into a third one, and then only one packet still survives, maybe this violates kind of the end-to-end principle, and we want to, yeah, elaborate on what we think might be the problems here, and we want to find out what we can do about that, or what the essential requirements for such a protocol would be.
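As a toy illustration of the kind of in-network function discussed here, and of why it worries the end hosts, this hedged Python sketch combines two sensor readings into a single aggregate packet, so only one packet survives toward the controller. The field names and the averaging rule are assumptions, not taken from the draft.

```python
# Hedged sketch: an in-network aggregation function that merges two sensor
# readings into one packet. Only the aggregate continues upstream, which is
# exactly the behaviour that an unmodified end-to-end transport would not
# expect. Field names and the aggregation rule (mean) are illustrative only.

def aggregate(pkt_a, pkt_b):
    assert pkt_a["sensor_group"] == pkt_b["sensor_group"]
    return {
        "sensor_group": pkt_a["sensor_group"],
        "value": (pkt_a["value"] + pkt_b["value"]) / 2.0,  # combined reading
        "sources": [pkt_a["seq"], pkt_b["seq"]],           # provenance hint
    }

if __name__ == "__main__":
    a = {"sensor_group": "torque", "seq": 17, "value": 3.9}
    b = {"sensor_group": "torque", "seq": 18, "value": 4.1}
    print(aggregate(a, b))  # one packet replaces two originals
```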
A: Are there any questions? I actually have one, without my chair hat on. I think there are a lot of very good opportunities in what you mentioned, but I read the draft, and I think what is missing is a strong differentiation between what already exists in automated industrial environments, which, as you know, are pretty advanced, and what you're bringing with this architecture. I don't think it's clear in the draft, and I...
A: ...don't think it's clear in the presentation either, because if I look at this with a very cynical hat on: oh, you know, they already do that in highly automated production. But I actually know from other work that there are new cases, and there are places in industrial environments where the type of work that you're suggesting has a lot of room. So I would suggest that in the next version of your draft you clearly identify the difference between what you're suggesting and what is currently happening in automation, yeah.
K: Your next step is a draft on transport protocols, yeah: do you need something new? Is there anything you think is missing in existing transport protocols?
P: What I basically meant with that is this violation of the internet principles. We don't think that it will work with the existing protocols, or at least not out of the box, because the end hosts somehow have to know that someone in the middle is interfering with what they are doing, and that is actually what I meant with this draft on the transport protocol.
Q: Alright, is it okay, can you hear me? Okay, so I'm Junchen, an assistant professor at the University of Chicago. This is actually the first time I'm at an IRTF meeting, and I'm very happy to be here to share my research and, hopefully, at the end, to connect my research to the agenda of the COIN RG. So, unless you live under a rock, you know that video analytics is everywhere, from public transportation to public safety.
Q: All right, so, first of all, there are two trends in video analytics. One: the neural network models are getting more and more accurate at the cost of more and more computing resources, so it's getting more and more expensive to run these models. And second, there are just a lot more cameras. These two trends combined have led to this dramatic growth of the cost of video analytics. Now, to put that into perspective, this is not just about compute and storage; it's also about networking.
Q: To put that into perspective, think about internet video streaming, like Netflix or YouTube: these are the kind of applications that account for 80% of internet traffic to consumers, and all the network systems these days, like CDNs or the cloud, are built around these traditional applications. Now the fact is, at least surveillance camera videos have already exceeded the traffic of internet streaming video, and analyzing one camera feed is way more expensive than just streaming one.
Q: So that means the capacity in today's internet systems is way less than what is needed to analyze all these video feeds; so clearly something is needed to increase the accuracy of these analytics at low cost, and to do so in a way that can scale to the sheer number of cameras and video feeds. Now, just to give you a sense of what video analytics systems look like: you have a camera capturing video, and the video can be analyzed locally or can be sent to a server for analytics.
Q: So naturally you can imagine there's that kind of edge-to-cloud continuum, where the high resolution videos will be analyzed locally, and then only the part of the video that needs further investigation will be sent upstream to some more complex model, typically running on a cluster; it will do some further filtering, and only the remaining video will be sent upstream again to a very complex model in the cloud.
Q
So
now
so
far,
all
I
have
to
talk
about
is
very
similar
to
everything
you,
you
have
singing
edge
computing
right,
but
there
are
actually
two
unique
properties
of
video
analytics.
The
first
property
is
that
the
video
pipelines
must
be
very
adaptive
to
the
real-time
video
content.
Now
you
know
just
what
that
means
is
when
the
video
content
changed
over
time
right.
The
resource
demand
for
this
video
analysis
pipelines,
which
will
vary
dramatically
over
time
as
well.
That
makes
resource
provisioning
very
challenging.
Q: Imagine this is one of the very typical video analysis pipelines out there, and there are quite a lot of configuration knobs that you can tune. The frames of a video get fed into a module that resizes the video and then selects which frames to sample, and then it gets fed into this neural-network object detection software. So you can see there are a bunch of knobs.
Q: You can change the resolution, the frame rate, and even which neural-network object detection model to use, and people have been trying to exploit these knobs to customize the pipeline to the video content. Now, I was just informed that this video is not going to play, so I'm going to ask you to imagine what would happen here. Let's say this is actually the same video twice; on the left-hand side you see these bounding boxes.
Q: These boxes are cars detected by the neural network, and if I played the video you would see these cars staying put, I mean, they are stopped there; so if you use a low frame rate on the left versus a high frame rate on the right, their accuracy will be pretty similar. When the objects are pretty static, a low frame rate...
Q
It
will
be
enough,
it's
not
much
going
on,
but
if
you,
if
I,
were
to
click
the
play
button
again
right,
you
will
see
there
are
cars
moving
into
the
picture
and
running
in
high
speed,
and
if
you,
if
you
have
objects
moving
at
high
speed,
the
low
framerate
will
give
very
low
accuracy
because
they
were
not,
it
will
lose
track
of
these
objects.
Okay,
so
what
it
means
is
I
mean
that
gives
us
or
a
key
insight.
That
is,
the
video
analytics
pipeline
must
be
customized
to
the
video
content.
Q
The
real-time
video
content
so
basic
as
the
video
content
varies
over
time
like,
for
example,
speed
changes.
The
best
configuration
will
vary
over
time
as
well.
Now
this
is
not
just
about
framerate
I,
just
used
I'm,
just
using
framerate
as
one
example
resolution
and
the
underlying
neural
network.
Lassa
fire
should
also
be
changed
depending
on
how
and
how
frequent
the
content
changes
now
all
prior
work
has
been
doing
is
like
one
like
so-called
one-time
profiling.
Q
They
profile
the
video
at
up
front
and
then
stick
to
the
configuration
they
think
is
the
best
through
the
remaining
of
the
video.
What
we're
proposing
in
this
recent
work
is
try
to
try
to
to
argue
that,
instead
of
just
using
one
configuration
that
seems
to
be
good,
we
should
adapt
the
video
pipeline
ok
over
time
to
the
dynamic
video
content.
Ok,
so
this
is
the
new
idea.
We
we
studied
this
idea
in
in
a
recent
paper.
Q: So what is the main architecture? What it means is you're going to have this controller in the middle that's basically sitting in a continuous loop: it will periodically re-profile the video and update the control knobs in use, like what frame rate or what model you should use, and you're going to run this continuously. I'm going to skip a lot of details here, but the upshot is you can achieve a lot of resource saving, or a lot of accuracy improvement.
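A hedged sketch of the control loop being described (in the spirit of the published work, but not its actual code): periodically re-profile a handful of candidate configurations on recent frames, then run the cheapest one that stays within an accuracy target until the next profiling epoch. The knob values, the `accuracy_of` callback and the thresholds are assumptions for illustration.

```python
# Hedged sketch of periodic re-profiling: every `profile_every` segments,
# measure each candidate (frame_rate, resolution) knob setting on recent
# frames and keep the cheapest one whose estimated accuracy stays above the
# target. `accuracy_of`, `analyze` and `cost_of` stand in for real profiling
# and analysis; they are assumptions, not the paper's implementation.

CANDIDATES = [(30, 720), (10, 720), (10, 480), (5, 480)]  # (fps, resolution)

def cost_of(fps, res):
    return fps * (res / 480.0)            # crude proxy for compute cost

def choose_config(recent_frames, accuracy_of, target=0.8):
    ok = [(cost_of(f, r), (f, r)) for f, r in CANDIDATES
          if accuracy_of(recent_frames, f, r) >= target]
    return min(ok)[1] if ok else max(CANDIDATES)   # fall back to best quality

def run(video_segments, accuracy_of, analyze, profile_every=4):
    config = CANDIDATES[0]
    for i, segment in enumerate(video_segments):
        if i % profile_every == 0:                  # periodic re-profiling
            config = choose_config(segment, accuracy_of)
        analyze(segment, *config)                   # run pipeline with knobs
```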
Q: This is one traffic video dataset, and you can see that the blue points are profiling just once up front, where you don't change the configuration, and the red ones are the proposed method, where you're continuously re-profiling and updating the configuration, and it may just be able to claim this: higher accuracy at the same cost, or the same accuracy at a very small fraction of the computing cost.
Q: Now this is just research, but what it tells us, even though this is good news, is that it actually makes resource allocation very challenging, because now, whenever the content changes, you have to change your resource allocation. This is actually a plot of the resource demand of the system over time, and you can see the resource consumption changes by 5 to 20x, even just within a few seconds, just because the content changes rapidly, and this basically raises the challenge:
Q: ...how do you actually do resource allocation in in-network computing to cope with this kind of spiky workload? Okay, so that's one takeaway from the first part of the research. Now, for the remaining couple of minutes, let me just briefly talk about the second unique challenge, the second unique property, of video analytics. So what that means is, okay:
Q: ...let's imagine again you have a network connecting a camera and the cloud, and the camera doesn't have local resources to process the video, typically, so the video has to be streamed out to the server. What people traditionally do is look at the video, compress it to some quality level, and then send the video to the server. This has two problems: either you encode your video at low quality and lose accuracy, ...
Q: ...because you can't see anything, or you send high quality, which is good for accuracy, but you may not have enough bandwidth to send it. Now, the fundamental reason for this is that the traditional video streaming protocol doesn't really use feedback from the consumer, because for traditional video the consumers are actually human beings; you can only ask them for coarse experience feedback. But in video analytics you actually can, because the consumer, the user here, is actually logic, an algorithm.
Q: You can actually pull some feedback from it, and this is what we're trying to do. This animation is messed up, but what's basically happening here is that you can just send a very low quality version of the video to the server, and the server will run some analytics to give hints to the client, in real time, about what it actually needs. So that's kind of a new way to do...
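A hedged sketch of this camera-to-server feedback loop (inspired by the idea described, not the actual system): the camera first sends a low-quality pass, the server's detector returns the regions it is unsure about, and only those regions are re-sent in high quality. The `encode`, `send` and `detect` callbacks, the quality levels and the region format are illustrative assumptions.

```python
# Hedged sketch of analytics-driven streaming: send a cheap low-quality pass,
# let the server's detector report regions it needs in better quality, then
# re-send only those regions. `detect` is a stand-in for the server-side
# model; quality labels and the region format are assumptions.

def stream_with_feedback(frame, encode, send, detect, conf_threshold=0.5):
    low = encode(frame, quality="low")
    send(low)                                   # cheap first pass
    results = detect(low)                       # server runs analytics
    unsure = [r["box"] for r in results if r["score"] < conf_threshold]
    for box in unsure:                          # re-send only uncertain regions
        send(encode(crop(frame, box), quality="high"))
    return unsure

def crop(frame, box):
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]  # frame as a 2-D list of pixels

if __name__ == "__main__":
    frame = [[0] * 8 for _ in range(8)]
    sent = []
    boxes = stream_with_feedback(
        frame,
        encode=lambda f, quality: (quality, len(f)),
        send=sent.append,
        detect=lambda low: [{"box": (0, 0, 4, 4), "score": 0.3}],
    )
    print(boxes, sent)
```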
Q: ...video streaming for video analytics, and this can actually save a lot of bandwidth while still achieving the same high accuracy. So, the takeaways from this talk: one is that the video pipeline must be adaptive to the real-time video content, and that means whenever the content changes the resource demand may vary as well, which causes a very spiky workload, and whatever in-network resource allocation mechanism we're proposing must be able to cope with that.
Q: And second, because video analytics is dealing with an algorithm as the final consumer of the system, you can actually leverage some real-time feedback from it, and this opens up new opportunities to bring the goals of the analytics into the resource allocation control loop. Okay, thank you; I'm ready for questions.
C: So I guess I'd love for you to elaborate on what you imagine it would entail to bring the analytics goals into the control loop. Do you see that there is metadata specific to these kinds of cascades that could be exposed? Can you say more, given that you're the expert here?
Q: Good question. The question is: if you want to leverage feedback from the server, or the analytics logic, it would be better to have a kind of standardized version of the metadata, to generalize what kinds of feedback would be useful. I think this is very much in the early stages of research; there have been several papers along this line. One thing that's really interesting is that you make some assumptions about the content of the video.
Q: For example, if, looking at a very low quality video, it sees an object that just appears once and disappears all of a sudden, that doesn't make a lot of sense. That kind of anomaly is one thing that really stands out across several papers; I'm just trying to generalize; obviously they have very different mechanisms to solve it. But an anomaly in the result is one interesting signal.
Q
Interesting
sink
you
look
at
the
other
thing
is
so-called
confidence,
so
confidence
as
in
so
this
neural
network
usually
have
some
scores
attached
to
each
each
detection
and
scores
being
high
means
it's
very
confident
that
this
is
actually
a
thing.
If
it's
low.
That
means
this
McClure.
It
doesn't
mean
anything,
but
you
can
treat
it
as
it's
not
being
very
confident.
Okay,
so
that
score
is
one
thing
also
useful
as
a
feedback
for
you
to
pull
more.
You
use
well
information
from
their.
Q: Usually it stays within a certain range; the low and the high ends won't have a difference of more than 10x, and that has an actual underlying reason: the neural networks these days assume a certain input size, so even if you give them a very, very high resolution video, it will be resized into something smaller anyway. So I'm just saying the magnitude of resolution change that I'm seeing is not really the full scale of variance that you would see if you had a better neural network.
Q: I think most of the work in this space is not trying to figure out the optimal number of layers between the camera and the cloud; they're trying to say: if you have this number of layers, how do I spread the analytics across these layers? I think that's actually a really good question to investigate from this group's specific perspective.
S: Could you put up the spiky little graph again? I'm trying to read it; the x-axis is kind of compressed, so how much time is between the bottom and any given peak? Is that seconds, tenths of seconds? I can't really see this. Well, let me get to the question I really want to ask: how many RTTs do I have before I see the spike, before I know that it's big?
Q: Right, but I'm just saying that in theory you can catch that up front; you do need several iterations to really converge, and after that, I mean, resource allocation is not free, it's not instantaneous, so you kind of need to take that into account; maybe that's actually what was taking most of the time.
S: Well, well then, the interesting question is: do you need the model everywhere, and can you get the model in, right?
G: Do you consider how urgent it is to understand what the camera sees? If it's really urgent, then the neural network should be on the camera, and then, if it doesn't understand, only frames on demand are sent, not the whole video.
Q: If everything must be super real-time, then in that kind of situation you do want everything local; versus if you have a bunch of surveillance cameras, or a camera network where you have thousands of cameras, you want them cheap, much cheaper than the ones you have on self-driving cars, and in those kinds of situations you need a back-end network system to analyze the data. But actually you're right:
Q: ...in those cases most people are assuming they are not as urgent, I mean not as real-time, as self-driving cars, so they can tolerate a certain level of delay. Yes, okay, but maybe the worst case, the hardest version of the problem, is: you have a lot of cameras, cheap ones, and you need super real-time reaction. That's the kind of holy grail of this kind of system.
G: There has to be layering, because the car itself is always going to have some kind of computation on board; but what you can consider is a neural net on the camera. If it's a moving camera it cannot expect what's going to happen, somebody may jump out, a kid, a ball; and then at the edge if you need sub-ten milliseconds, and then in the cloud if you need something slower.
E: Very good, okay, so let's start. My name is Jesper Eriksson and I'm VP of Product Management and a co-founder of NoviFlow, and I have logged on to this session using Marc LeClerc's registration credentials. He is a dear colleague of mine, he heads our marketing strategy and is also a co-founder, and to do this I had to promise him not to use any coarse language or get him into any trouble whatsoever. So you see Marc, but it's really just me.
E: So what is a match-action pipeline? Well, the match-action pipeline resides in the switch silicon inside a switch or router, and it's really the embodiment of the rules by which we want to process a packet as it goes through the switch. In most switches you'll find a fixed ASIC, and that's really what you see on the left: a fixed ASIC executes a fixed set of match-action tables defined there in the silicon, and the size of these tables is fixed.
E
What
the
fields
that
you
can
match
on
is
predefined
and
the
actions
that
you
take
in
the
table
in
that
particular
table
is
also
fixed,
and
then
you
know,
as
an
application
programmer
trying
to
use
this
in
an
SDM
context.
You
know
I
would
have
to
try
and
map
my
application
into
this
fixed
max
match
action
pipeline.
So
it's
really
a
bottom.
What
we
call
a
bottoms
up,
programming
paradigm,
you
have
to
see
what's
in
the
silicon
and
that
really
drives
what
you
can
do
with
your
application
and
then
on
the
right.
E
You
see
a
programmable
silicon,
and
here
there's
no
prior
set
of
mass
action
tables
defined
in
the
silicon
and
the
application
programmer
creates
the
pipeline
to
specifically
meet
the
needs
of
the
application.
You
know
the
the
programmer
from
from
scratch
says:
I
need
this
many
tables.
This
is
the
size
of
the
various
size
of
the
tables,
the
types
of
the
tables
and
then
what
match
fields
and
actions
I
want
to
use
in
each
table,
and
it
really
allows
the
programmer
and
the
application
to
drive
the
packet
processing
pipeline
in
the
in
the
silicon.
E
So
who
cares?
You
know
what?
Why
is
this
important
and
and
well
in?
In
our
view,
you
know
this
program
will
match
action.
Pipeline
enables
the
following
thing:
the
first
one
is
faster
introduction
of
new
network
functionality
in
protocols,
and
you
know,
there's
an
endless
list
of
of
this
I
mentioned
a
couple
here.
You
know
ipv6,
I
AMSO
v6,
essentially
any
new
protocol
that's
introduced.
E
E: Another key point is that this allows the disaggregation of network hardware and software, in that the hardware looks more like a server and can then be sourced differently, and the software is really what defines the functionality. And the third bullet here is that features are defined in software and not in the hardware, and what that drives is that there's really no forced obsolescence of the networking equipment, as you can upgrade the functionality through software over time.
E
So
it
has
a
lot
of
freedom.
The
programmer
can
say:
I
want
two
tables
I
want
to
match
on
this
in
the
first
table.
I
want
to
match
on
this
in
the
second
table,
I
want
to
highlight
the
metadata
field
as
an
interesting
feature
in
open
flow
and
and
also
in
p4.
It
allows
you
to
bring
the
result
from
one
table
to
the
next
and
match
on
it.
E
It
makes
like
this
so
here
you
see
a
real
implementation
of
a
true
open
flow
switch
and,
and
the
purpose
of
this
slide
is
really
to
illustrate
what
a
true
open
flow
switch
looks
like
and
and
switch
that's
compliant
with.
You
know
open
flow
1.4,
and
it
may
give
you
some
ideas
of
what
you
can
do
with
with
open
flow
and
from
a
network
computing
perspective
and
and
what
it
shows
is
it
defines.
E: A good example here is the NoviFlow implementation, where you have up to 1 million flow entries in a TCAM in up to 60 different tables, and then, in the exact-match use case, you have up to 6 million rows, or flow entries, in up to 60 tables; so the application programmer can really put together the pipeline, using these primitives, that then supports the application.
E: This was a switch NOS that allowed you to use the Barefoot Tofino switch as an OpenFlow 1.4 switch: we essentially wrapped OpenFlow around P4 and allowed the user to create and run an OpenFlow pipeline on a Tofino white box. So basically the user saw an OpenFlow switch, but internally in the NOS we mapped that into P4 code that got compiled and pushed into the silicon.
E: That was the first step. So, when you look at the P4 and P4Runtime match-action pipeline: the P4 part is the programming language that is used to define how the switch silicon processes packets. You can define the parser, which determines what kind of match fields you're going to get, you can program...
E: ...the actions, you know, what am I going to do, and then I can program and define the tables in a match-action pipeline; and then P4Runtime is really the interface from an external or internal P4 controller to access and program this match-action pipeline: you know, add flow entries, or, initially, what you do first is load the compiled P4 program, and then you can add and delete flow entries in the match-action tables.
E: This slide is really to illustrate the extent to which the parser, the match-action pipeline and the actions are defined in software: you basically define an Ethernet header, an IPv4 header, the tables, what you match on, what the actions are, and so the software definition is driven all the way down to bare bones. And then...
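To make the match-action abstraction concrete for readers who have not used OpenFlow or P4, here is a small hedged Python model of one pipeline: tables match selected header fields and apply an action, carrying metadata to a next table if present. This is a didactic sketch of the concept, not P4 syntax or NoviFlow's implementation.

```python
# Didactic sketch of a match-action pipeline: entries match on selected header
# fields and name an action; metadata produced by one table can be matched in
# the next. This models the concept only; it is not P4 or OpenFlow code.

class MatchActionTable:
    def __init__(self, match_fields, next_table=None):
        self.match_fields = match_fields   # header/metadata fields this table keys on
        self.entries = {}                  # key tuple -> (action, params)
        self.next_table = next_table

    def add_entry(self, key, action, **params):
        self.entries[tuple(key)] = (action, params)

    def process(self, packet, metadata):
        key = tuple(packet.get(f, metadata.get(f)) for f in self.match_fields)
        action, params = self.entries.get(key, ("drop", {}))
        ACTIONS[action](packet, metadata, **params)
        if self.next_table:
            self.next_table.process(packet, metadata)

ACTIONS = {
    "drop":     lambda pkt, md: md.update(dropped=True),
    "set_meta": lambda pkt, md, **kv: md.update(kv),
    "forward":  lambda pkt, md, port=0: md.update(out_port=port),
}

if __name__ == "__main__":
    t2 = MatchActionTable(["class"])                      # matches metadata set by t1
    t2.add_entry(["gold"], "forward", port=7)
    t1 = MatchActionTable(["eth_type", "ip_dst"], next_table=t2)
    t1.add_entry([0x0800, "10.0.0.5"], "set_meta", **{"class": "gold"})
    md = {}
    t1.process({"eth_type": 0x0800, "ip_dst": "10.0.0.5"}, md)
    print(md)   # {'class': 'gold', 'out_port': 7}
```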
E: ...comparing OpenFlow and P4/P4Runtime: both of them give you a programmable match-action pipeline; however, P4/P4Runtime gives you additional freedoms, as you can program the parser and also define your own actions. In OpenFlow you can do some of that through experimenter extensions, where the developer can define new match fields and actions.
E: Next slide: here's a slide showing the components of a NOS, and what we've been talking about is the OpenFlow and P4Runtime part; but on top of that, in a typical NOS you also have, you know, configuration management, operations management, security management and other extensibility. And so that was really my presentation.
S: I've asked P4 people this multiple times, and I'm curious if you have a potentially more satisfying answer to this question, which is: I have two programmers, each of which has written a P4 program, and they don't know about each other. What's the composition model for composing two P4 programs into one? What's the model for how those two programs can be loaded into the same switch? And what's the model for changing the program of the switch without resetting the switch?
A: Send that to the list, that would be great. And, by the way, for those who got here very late, Jesper and his colleagues lent us two engineers who could probably answer your question, and so that was really great. Are there any other questions? At the hackathon, yes, the code you saw was similar to the stuff we did at the hackathon. Any other questions?
N: Yes, thanks. Good, very good. I think I'm going to be timely. This draft was submitted a couple of weeks ago as a result of ongoing work as well as proposed research. A lot of the questions we are asking, we have ideas of answers, but no real ones yet, and that's why we thought it might be a good idea to actually put this to the proposed research group, because we feel that might be a good space for it.
N: The work is largely centred around microservices, that's what we're working on, but we have a slant in the draft around what we call app-centric microservices, and the reason for that is really to start from, you know, the app economy in the smartphone world. Look at smartphones: I have about 200 apps on my smartphone, and, you know, it has driven the development of the mobile experience as we know it; we use applications on our smart devices.
N: The design is fairly static: there are software modules, they're packaged into an application, you download it from a store, and you're done; there are extended client-server interactions you usually see within these apps, but that's about it. What we want to move to, mentally, at least as a thought experiment, and we have first demos on this, is a mental model where we look at an app as a collection of microservices that you can decompose and start bouncing around in the network.
N: That's, you know, why we call them apps: you decompose an app into a set of microservices, you execute them on one or more distributed resources that can be at the edge of the network or in the cloud, wherever they are, and you interpret each of these compute points as a pico or micro data centre; and that's how we coined the name App Centre. We don't look at the data centre; we say: well, you just run micro apps on it, right? That's...
N: ...where the use case we've demonstrated comes from, and it's described in the draft: mobile function offloading. This runs on standard Android; we do Android because we just don't program iOS, for reasons of resources, really. We wrap microservice helper classes around modules, and this is done purely at design time, so you have to do this at the moment, at least; we're also working on a runtime version that does this automatically. The example that's described in the draft...
N
Is
you
know
very
simple:
you
receive
an
image
you
processed
the
image
to
do
some
very
snapshot,
II
kind
of
thingy,
and
you
display
it's.
We
micro
services
makes
a
lot
of
sense
right
and-
and
now
we
interpret
a
given
experience
like
watching
that
more
video
as
a
chain
of
micro
services
that
you
know
perceive
process
this
by
and
you're
done
words
and
when
you
run
the
actual
application
now,
which
is
still
an
application,
is
being
installed
as
an
application
from
the
actual
Play
Store,
and
all
these
micro
services
run
on
your
device.
N: It's just as you're used to, because it's an app, right. But what you can do, and we've done it: we wrote a small software module that kills the modules, it essentially kills the processes, and what you see, because they're microservices, is that they bounce off the actual device and run in the network, as in in-network computing, and in the draft we describe a policy for how this is configured.
N
I,
should
have
probably
used
a
better
one,
and
we
realize
that
over
an
SD,
Sdn
infrastructure,
the
app
if
it
runs
it's
a
DPR
on
the
top
store,
stands
for
the
modules
you
know
despite
process
and
receive
you
run
them
like
that
they
run
on
on.
You
know,
that's
just
the
APIs,
you
see
everything
runs
on
your
mobile
and
we
have
a
very
simple
control:
UI,
no
intelligence
in
there.
N: The intelligence is yourself: you knock them off, and you kill, for instance, the processing, the P of the microservice chain, and since it is missing, it jumps onto a processing server in the network. The processing server is more capable, so you can run different processing routines, etc. You kill the D and it jumps onto a smart TV, which only implements the D function, not the other ones, because it doesn't do processing. So you get distributed...
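A hedged sketch of the receive-process-display chain behaviour being described: each stage runs locally unless it has been "killed", in which case the dispatcher invokes a remote implementation of the same microservice. The stage names match the talk's R/P/D example; the registry and the remote stand-ins are assumptions for illustration, not the demo's actual mechanism.

```python
# Hedged sketch of the R/P/D chain: each microservice runs locally unless it
# has been killed, in which case the same stage is invoked on a remote
# resource (edge server, smart TV, ...). The registry and remote stubs are
# illustrative assumptions only.

local_impl = {
    "receive": lambda _: "raw-image",
    "process": lambda img: f"filtered({img})",
    "display": lambda img: f"shown[{img}]",
}

remote_impl = {   # stand-ins for implementations offered by network nodes
    "process": lambda img: f"edge-filtered({img})",
    "display": lambda img: f"smart-tv[{img}]",
}

def run_chain(killed=frozenset(), data=None):
    """Run receive -> process -> display, offloading killed stages."""
    for stage in ("receive", "process", "display"):
        impl = remote_impl[stage] if stage in killed else local_impl[stage]
        data = impl(data)
    return data

if __name__ == "__main__":
    print(run_chain())                        # everything on the device
    print(run_chain(killed={"process"}))      # P bounces to an edge server
    print(run_chain(killed={"display"}))      # D bounces to the smart TV
```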
N: ...experiences; it's really cute. We ran this as a concept demo in February this year; as I said, it works on standard Android. What we describe in the draft is what some of the technologies are: you can do all of this as a vertically integrated demo, and it works perfectly, but the whole point about standards is to do this, obviously, in a way that works not just for our demo but for everybody. So we outline certain areas in the draft where we feel work needs to be done.
N: Some of it has to do with application packaging; I'm not entirely sure that's an IETF area, but we nonetheless describe it. This is usually done at design time, and what we do is define wrapper classes; we expose those, and we are in the process of injecting them into the open source community so they can be used. You can also do profiling to do this...
N: ...actually at runtime; that's the real fun part: you have an application that hasn't been designed around microservices, and it's being pulled apart into microservices at runtime, and then it starts bouncing about in the network. That's actually quite cool; we hope to demonstrate that in a couple of months' time. Another area is service deployment: how does the P server make it there? You remember, from the slide before, there was the processing server in the network. We have actually done this by combining application installation with service orchestration.
N
When
you
install
the
app,
we
are
not
using
the
actual
Android
application
installer
we
do
use,
but
we
have
a
an
extended
version
which
not
only
installs
the
application,
but
it
points
out
of
the
asset
package
of
the
application
package,
a
service,
orchestration
template
and
pushes
it
into
the
netbook
and
says
dude
I
need
a
processing
server.
Can
you
please
install
one
for
me
right
and
the
packages
over
there
again
there's
a
lot
of
standardization?
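A minimal sketch of that install-time step, under the assumption of an invented asset path, template format and orchestrator interface (none of this is taken from the draft or from the real Android installer): the extended installer pulls the orchestration template out of the application package and asks the network to deploy it.

import json
import zipfile

# Hypothetical install-time hook: an APK is a zip archive, so an extended
# installer can pull a service orchestration template out of its assets and
# hand it to an in-network orchestrator.  The asset path, template format and
# orchestrator interface below are all invented for illustration.

ASSET_PATH = "assets/orchestration-template.json"   # assumed location

def extract_template(apk_path):
    with zipfile.ZipFile(apk_path) as apk:
        with apk.open(ASSET_PATH) as f:
            return json.load(f)

def push_to_orchestrator(template):
    # Stand-in for "dude, I need a processing server, please install one":
    # a real deployment would hand this to the orchestrator's API.
    for service in template.get("services", []):
        print(f"requesting deployment of {service['name']} "
              f"({service.get('image', '?')}) in the network")

def install_app(apk_path):
    # 1. normal application installation would happen here
    # 2. additionally, deploy the in-network parts declared by the app
    push_to_orchestrator(extract_template(apk_path))

# install_app("game.apk") would then both install the app and request
# the in-network processing server declared in its template.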
Again, there's a lot of standardization you would need to do to make that work outside a vertically integrated demo. And the integration with the app installation model is really quite cute, because application users just see an application installation; they don't really see a service deployment. So that's one of the reasons why we like that.
Service routing: I talked about stuff bouncing about, and obviously that requires that you have a service routing infrastructure in place. That is usually done as a combination of DNS and IP routing. Our bouncing about is relatively flexible, so we use ongoing work in the SFC working group on so-called named service function chaining, as well as service routing in other environments, which allows you to actually do this really flexibly. So we have a couple of demos where you will see this.
N
If
you
do
this
in
a
standard,
DNS
plus
IP
system,
it
won't
be
required
work
because
the
city
is
not
there
and
we
describe
this
issue
in
the
traffic
as
well.
The
Dynamis
'ti
of
in
network
computing,
in
particular,
when
you
bounce
functions
around
based
and
use
interactions,
I
walk
into
the
room.
I
suddenly
want
to
go
for
my
actual
discipline
and
the
mobile
I
want
to
jump
onto
the
display
in
the
room,
that's
very,
very
flexible,
and
it
requires
solution
for
service
routing
that
are
probably
different
from
the
ones
we
know
service
pinning.
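A toy sketch of why per-request, name-based resolution helps with this dynamicity (this is not the SFC extension itself; names and classes are illustrative only): the binding from a service name to an instance is looked up on every invocation, so a function that has just bounced is picked up immediately, while a cached DNS-style answer keeps pointing at the old place.

# Toy contrast between cached resolution (DNS-like) and per-request
# name-based service routing.

class ServiceRegistry:
    """Maps a service name to the instance currently providing it."""
    def __init__(self):
        self.instances = {}

    def announce(self, name, endpoint):
        self.instances[name] = endpoint

    def resolve(self, name):
        return self.instances[name]

registry = ServiceRegistry()
registry.announce("display", "phone")

# DNS-style client: resolves once and caches the answer.
cached = registry.resolve("display")

# Name-based service routing: resolve on every request.
def invoke(name, payload):
    endpoint = registry.resolve(name)
    return f"{payload} rendered on {endpoint}"

print(invoke("display", "frame"))              # phone
registry.announce("display", "room-display")   # the function bounces
print(invoke("display", "frame"))              # room-display, picked up immediately
print(f"cached answer still says: {cached}")   # stale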
Service pinning: we describe that based on the use case. If you have two resources that actually implement the same microservice, how do I make sure I'm actually using the one that I really want to use? Think about the display: it might matter an awful lot which display you actually display stuff on, even though it's the same D microservice; I really want to have that one and not the other one. And for various reasons the pinning relations can change very frequently: your requirements, your constraints change, you know; I might walk into a different room.
I do want to go use the other display, so the pinning has to change frequently as well, all right? And how do I do this in a standardized manner that will allow me to do this?
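A small sketch of what such a pinning decision could look like; the attributes and the policy are assumptions, not taken from the draft. Among several instances implementing the same D microservice, the one matching the user's current constraint is pinned, and the pin is recomputed when the constraint changes.

# Illustrative service-pinning logic: several instances implement the same
# "D" (display) microservice; the constraint decides which one is pinned.

displays = [
    {"id": "mobile-display", "room": None,     "size_inches": 6},
    {"id": "room-a-display", "room": "room-a", "size_inches": 65},
    {"id": "room-b-display", "room": "room-b", "size_inches": 55},
]

def pin(instances, constraints):
    """Pick the instance matching the user's current constraints."""
    room = constraints.get("room")
    if room is None:
        # user is not in a known room: stay on the personal device
        return next(i for i in instances if i["room"] is None)
    in_room = [i for i in instances if i["room"] == room]
    if in_room:
        # prefer the largest display in the current room
        return max(in_room, key=lambda i: i["size_inches"])
    return next(i for i in instances if i["room"] is None)

print(pin(displays, {"room": None})["id"])      # mobile-display
print(pin(displays, {"room": "room-a"})["id"])  # room-a-display
# walking into a different room re-pins to the other display
print(pin(displays, {"room": "room-b"})["id"])  # room-b-display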
And the last area that we describe is state synchronization. We do work with a mixture of stateless as well as stateful microservices; you can't enforce only one of them, and therefore state synchronization is very, very crucial. Typical use cases that are very, very stateful are in particular gaming use cases, where you indeed need good state synchronization.
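A minimal sketch of a state handoff during such a move, assuming a simple snapshot-and-restore approach (the draft does not prescribe a mechanism): a stateful microservice has to carry its state to the new instance, whereas a stateless one could simply be restarted.

import json

# Toy stateful microservice whose state must travel with it when it
# bounces to another node; a stateless one could just be restarted.

class StatefulCounter:
    def __init__(self, state=None):
        self.state = state or {"count": 0}

    def handle(self, request):
        self.state["count"] += 1
        return f"{request} handled ({self.state['count']} total)"

    def snapshot(self):
        # serialize state so it can be shipped to the next instance
        return json.dumps(self.state)

    @classmethod
    def restore(cls, blob):
        return cls(state=json.loads(blob))

on_phone = StatefulCounter()
on_phone.handle("req-1")
on_phone.handle("req-2")

# migrate: snapshot on the old node, restore on the in-network node
in_network = StatefulCounter.restore(on_phone.snapshot())
print(in_network.handle("req-3"))   # "req-3 handled (3 total)"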
N
So
these
are
the
areas
and
I
said
we
have
a
couple.
They
are
reference
in
the
draft.
You
can
see
to
ongoing
work
that
we
also
are
currently
doing
the
ITF
and
other
working
groups,
and
not
generally,
we
try
to
keep
those
sections,
as
these
are
cool
areas
to
work
on
I
think
we
need
some
work
this
in
order
to
move
into
standardized
environments,
and
it
would
be
good
to
have
a
place
for
this
alright.
N
So
the
conclusion
really
is
really
trying
to
stick
to
my
ten
minutes
is
that
the
trapped
positions
in
evolution-
and
we
tried
to
stick
this
to
the
app
model,
because
everybody
knows
that
I
should
be
played
this
story
to
particular
people
who
don't
really
quite
understand,
but
an
Internet
really
is
with
a
sense
of
IP
packets.
We
played
this
to
teenagers
and
they
love
this
story
alright
by
for
them
the
internet
or
application
installations.
N
So
the
whole
idea
that
I
install
my
private
Internet
by
having
an
application
is
something
that
really
goes
done
really
well
and-
and
we
also
like
the
idea
of
moving
from
the
mobile
terminal
experience.
As
we
know
today,
everybody
has
a
smartphone
to
something
add
in
handy
disintegrates
into
distributive
experiences,
and
that's
the
the
reason
why
we
made
that
leap,
but
generally
I
feel
obviously
that
the
points
also
blight
the
general
micro
service
use
cases
its
itself.
I
said
we
interpret
available
compute
resources
as
these
peak
or
micro
data
centers.
N
They
can
be
dedicated
computer
resources.
In
the
demo
we
gave
this
year.
We
played
the
scenario
in
a
entertainment
scenario
hotel
where
the
compute
resource
was
provided
by
the
hotel
for
hotel
guests.
So
in
that
case
you
might
really
have
a
computer
act
downstairs
right,
but
Pico
data
centers
could
be
home
devices.
It
could
be
another
user
smartphone
and
be
also
actually
in
the
use
case
jumped
on
somebody
else's
smartphone.
The
smartphone
was
plugged
in
so
therefore
it
was
a
suitable
data
center.
N
We
believe
that
the
post
corner
G
would
be
a
really
really
good
platform
to
actually
discuss
some
of
these
issues,
bring
them
together,
evaluate
them,
but
also
obviously
link
to
ongoing
efforts.
A
lot
of
these
efforts
are
ongoing
in
other
working
groups
and
therefore
bringing
them
together-
and
probably
also
you
know,
get
more
people
involved
in
the
discussion
is
a
really
really
good
thing.
Some
of
the
comments
that
I
put
on
the
list
already
related
to
that.
So
there
were
a
couple
of
things
that
I
had
sent
to
the
list.
N
The
next
steps,
and
personally
I
really
hope
that
they
propose
research
group
is
indeed
approved.
We
plan
on
doing
is
to
update
the
draft
with
more
information
on
the
ongoing,
so
be
quite
clear
or
what's
going
on
as
that
they
are
references
in,
but
we
haven't
really
described
in
detail,
but
also
provide
an
overview
of
other
realizations,
because
in
the
currently
we
are
referring
to
a
lot
of
things
that
we
are
involved
in
there's
other
things
and
also
we
plan
on
demonstrating
realizations,
maybe
at
an
upcoming
meeting.
F
N
So the reference in that part of the draft is to named service function chaining; we actually extend it. We propose to extend service function chaining, which we saw in the presentation before, to an entirely name basis, so you issue URL-based requests rather than IP-based requests. And the service function, as was presented in the service function chaining talk before, is always associated with some form of control. We don't use that kind of controller; obviously, our service function chaining is defined in the application, in the application design.
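To illustrate the difference as a toy sketch (the actual mechanism is defined in the referenced SFC work, not here; all names and URLs are illustrative): the chain is expressed as a list of service names defined by the application, and each name is resolved to whichever instance currently serves it, rather than being fixed to IP addresses by a controller.

# Toy name-based service function chain: the application defines the chain
# as names, and each hop resolves the next name to whichever instance
# currently serves it.

registry = {
    "svc://receive": lambda x: x + ":received",
    "svc://process": lambda x: x + ":processed",
    "svc://display": lambda x: x + ":displayed",
}

def send_through_chain(payload, chain):
    for name in chain:                  # chain defined by the application
        instance = registry[name]       # per-hop, name-based resolution
        payload = instance(payload)
    return payload

print(send_through_chain("frame",
                         ["svc://receive", "svc://process", "svc://display"]))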
T
Okay, hello everyone, this is Liu from China Mobile, and I will be talking about the new draft on requirements of computing in the network. As we all know, computing in the network is becoming a new trend to meet the needs of emerging business. What needs to be computed and why? There are several problems that need to be considered, such as that the traditional network protocols
only optimize traffic, which can't guarantee latency or packet loss rate; the centralization of computing resources is not efficient; different businesses may need different kinds of computing; and there is little interaction among users, applications and the networks, which means that they don't know each other's requirements and capabilities. Some work has begun to consider these issues, but more needs to be considered. The number one requirement is deterministic network ability, which includes latency and packet loss rate.
T
So
for
the
latency,
it's
the
concept
from
in
time
to
anthem,
which
means
the
latency
is
not
necessarily
the
lower
the
better.
It's
just
like
to
size,
agreement
and
Sacre
Edition
and
for
the
packet
loss
rate.
It
includes
the
time
bearing
routing
which
found
the
link
time
very
regular,
based
on
AI
and
the
predicting
the
network
performance,
and
there
are
also
other
technology,
such
as
segmented
transmissions,
to
enhance
to
achieve
segmented
retransmissions,
and
the
number
two
requirement
is
computing.
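A small sketch of the "in time" versus "on time" distinction, with illustrative numbers only: in-time delivery only checks an upper bound, while on-time delivery checks that a packet arrives inside an agreed window, so lower latency is not automatically better.

# "In time": latency only has to stay below a bound.
# "On time": latency must fall inside an agreed window [lo, hi],
# so arriving too early is also a violation (e.g. buffering constraints).

def in_time(latency_ms, bound_ms):
    return latency_ms <= bound_ms

def on_time(latency_ms, lo_ms, hi_ms):
    return lo_ms <= latency_ms <= hi_ms

for latency in (2, 8, 15):
    print(latency, "ms:",
          "in-time" if in_time(latency, 10) else "late",
          "/",
          "on-time" if on_time(latency, 5, 10) else "outside window")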
T
And
the
last
requirement
is
network
Brown,
who
ability,
and
so
in
network
programming
and
the
resource
of
network
and
the
computing
information
can
be
transferred
by
network
to
users
so
L
to
the
requirements
of
the
information
transmitted
by
user
to
network.
So
the
network
can
configure
parameters
according
to
the
user
users
needs
and
the
users
transfer
requirements
based
on
network
abilities,
which
could
efficiently
support
the
future
application.
So
in
the
next
steps,
more
requirements
might
need
to
be
analyzed.
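As a sketch of the kind of two-way exchange this programmability implies (field names and the matching rule are invented for illustration, not taken from the draft): the user expresses requirements, the network exposes its abilities, and each side can adapt to the other.

from dataclasses import dataclass

# Illustrative two-way exchange between application/user and network.

@dataclass
class UserRequirements:
    latency_window_ms: tuple       # (lo, hi), the "on-time" window
    max_loss_rate: float
    compute: str                   # e.g. "gpu", "cpu"

@dataclass
class NetworkCapabilities:
    latency_ms: float
    loss_rate: float
    compute: set

def negotiate(req: UserRequirements, cap: NetworkCapabilities) -> dict:
    """Network configures itself if it can meet the request,
    otherwise tells the user what it can actually offer."""
    lo, hi = req.latency_window_ms
    if lo <= cap.latency_ms <= hi and cap.loss_rate <= req.max_loss_rate \
            and req.compute in cap.compute:
        return {"accepted": True, "configured_latency_ms": cap.latency_ms}
    return {"accepted": False, "offer": cap}

req = UserRequirements(latency_window_ms=(5, 20), max_loss_rate=1e-3, compute="gpu")
cap = NetworkCapabilities(latency_ms=12.0, loss_rate=1e-4, compute={"cpu", "gpu"})
print(negotiate(req, cap))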
S
So actually, this is just a little bit of historical stuff on it. The original spec for IS-IS had a per-node cost of traversing that node, which was called the hippity cost, so your packets go hop, hippity hop, hippity hop, hippity hop. Back then we took it out as gratuitous complexity, because there wasn't much difference between the costs of forwarding a packet on one node versus another node. It may be time to bring that back.
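For illustration, a per-node traversal cost is easy to fold into an ordinary shortest-path computation; this is a generic sketch, not the IS-IS specification: the cost of a path adds the cost of each node it passes through on top of the link metrics.

import heapq

# Generic shortest-path sketch where traversing a node has its own cost
# on top of the link metrics (the idea behind the old per-node IS-IS cost).

def shortest_path(links, node_cost, src, dst):
    # links: {node: [(neighbor, link_metric), ...]}
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, metric in links.get(u, []):
            nd = d + metric + node_cost.get(v, 0)   # pay to traverse v
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None

links = {"A": [("B", 1), ("C", 1)], "B": [("D", 1)], "C": [("D", 1)]}
node_cost = {"B": 5, "C": 0, "D": 0}   # B is expensive to traverse
print(shortest_path(links, node_cost, "A", "D"))   # 2, going via C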
A
Thank you very much. So we have just a few more minutes. We intend to have a virtual interim, probably in early October; we'll do an email poll, and we would like to address probably the last version of the charter and hopefully a set of milestones. I get the message from Dirk, the first Dirk, Dirk K., about having milestones while we're still a proposed RG, but we kind of like the idea of focusing the work.
A
We're
obviously
are
going
to
meet
at
ITF
106,
which
will
be
our
third
meeting
and
hopefully
we'll
be
accepted
before
that,
and
thank
you
so
very
much
for
everybody
who
came
and
for
all
your
support,
actually
I
think
we're
calling
themselves
the
gems
so
that
the
Jeffrey,
Eve
and
I
are
really
happy
of
how
this
is
going,
and
we
really
thank
you
for
all
your
great
work.
Thank
you.