From YouTube: IETF112-COINRG-20211111-1200
Description
COINRG meeting session at IETF112
2021/11/11 1200
https://datatracker.ietf.org/meeting/112/proceedings/
B: Welcome everyone. This is the Computing in the Network Research Group meeting, with Jeffrey He, Eve Schooler, and me, Marie-José Montpetit. We are the chairs of this, I think, exciting research field, one that I think is more and more important in the development of the internet.
B: I think everybody is aware of this; for this group, yes, it's important. But also, you know that we're recording this, because we are still virtual, and I guess even when we go non-virtual we'll do it. What's also very important are the disclosures of intellectual property, which are on the next slide or so, and the code of conduct, which is also very important; I think a few of us have had experiences that show us how important it is. Eve?
A: And can you hear me? Am I not on this?
B: Copyrights and everything; next one, sure, but maybe. Important, okay: we are meeting at the same time as the IETF, but we are the IRTF.
B: You will see from the agenda that there are important research papers that are going to be presented today. I know there was an audio issue, and I think I fixed it, so thank you to the people who mentioned it. Next slide, Eve.
B: We have a very packed agenda, and I'm already two or three minutes into this, so we're going to go very fast. We're very, very happy to have Scott Shenker present the Extensible Internet, which was a CCR paper that essentially raises a lot of interesting questions about the future of the internet and is also connected to some of the questions that we've had in this group, so we're very lucky with this. Then we have the very interesting in-network aggregation paper.
B: Then also the information-centric dataflow for distributed computing from Dirk. And we had, well, it's actually the new ideas section, but actually there's only one: Alessandro is not going to present because it's a holiday, so I'm going to present it. It's about an operating system for distributed applications; in COIN we are very big on thinking that the internet is moving to be more like a computer board, and if you have a computer board, you need an operating system. Next slide: then we have the drafts update.
B: Obviously a research group is not only about doing drafts, but we do have very interesting work happening in the drafts of this research group, and we would like to give them a chance to talk about the ideas. So: use cases, again to show that there's a way to use these things; transport protocols, because this has been an issue once you start putting computing in the system; and then security and privacy, again a big issue.
B
We
have
a
new
draft
that
was
submitted
by
china,
mobile
and
then
we'll
have.
B
We
would
love
to
have
10
minutes
left
to
talk
about
future
groups
and
future
meetings,
and
we
would
like
this
a
little
bit
to
evolve
into,
but
I
think
the
main
goal
is
to
have
the
papers
presented.
So
we're
going
to
do
this.
The
meat
deco,
please
you
found.
B: Here, this is good, yeah. If you're there, you found the Meetecho, which is good; the live minutes are integrated into Meetecho via CodiMD. The mailing list: well, if you're here, you probably know about our mailing list, and if you don't, well, please do subscribe. And you know we have a ton of documents.
B: Dirk, your document expired and needs an update, because it's an RG document; we have two RG documents. We have, again, new drafts; I think this slide needs an update because we also have the new draft from China Mobile, and we have a ton of other drafts that are at different levels of maturity. Essentially, this is something that we will raise on the list: the idea that we need to do something about it.
B: The milestones: we've done a lot of them, and, I'll go very fast because I know I'm one minute over, the last thing is that we need a milestone review, and we'll do it. And now we have the presentations, and the first one is Scott Shenker from Berkeley on the Extensible Internet. So, Scott, thank you very much for doing this, because it's so early in California.
F: I'll be talking about the Extensible Internet. This is based on a CCR editorial with 18 authors; there's a much smaller group that is trying to actually bring this into reality. I gave a talk on this at another research group, so I apologize to the people who saw that talk, Adrian and others, because there will be a large overlap. But the goal of this talk is really to give a very, very brief overview of the Extensible Internet, just enough to initiate a discussion both about the merits of the proposal and about how it relates to the charter of this research group; obviously, more details would be available in follow-up conversations.
F: The core subject of the talk is really architectural change. When I say the word architecture, I'm really referring to the arrangement of the data-plane functionality, that is, the layers and the basic functions they've been assigned. It's not about specific protocols; IPv4 to IPv6 is not an architectural change in my lexicon, nor is it about the control plane.
F: Now, there have been decades of architectural research trying to make changes to the basic architecture, but I think it's pretty clear that, after 20 years of clean-slate architectural research, there's been no discernible architectural impact, and the public internet, at least in the eyes of the researchers I talk to, seems doomed to architectural stagnation. We've tried it; we just don't see any movement. But that's not what the hyperscalers think: the cloud and content providers are building their own large private, IP-based networks.
F: They've got many points of presence, and, what's relevant to this working group, these points of presence apply extensive in-network services: flow termination, caching, load balancing, and so forth. These services have had a very significant impact on customer latency and reliability; they put a lot of money into it because they actually see very tangible benefits for their user community.
F: So why can't IP itself change? It's in too many places; it's got to be implemented with very good price-performance, so it's typically baked into hardware. Changing that is impossible now, and in the future I don't see that changing.
F: On the other hand, IP provides the service model to hosts. That is, this best-effort packet delivery is exactly what anything you want to do on a host sees and has to build on, so it has to support all application requirements; and these requirements are becoming more stringent, which is a reason to change the architecture. That's exactly why these hyperscalers have built out their own networks: to extend the architecture for their own purposes, so they can meet these requirements.
F
But
it's
this
dual
role
that
prevents
change
because
you
have
demands
coming
from
above.
You
have
constraints
coming
from
below
and
they
are
meeting
in
the
middle
in
a
single
layer,
so
the
network
must
meet
additional
application
requirements,
but
the
only
layer
that
can
address
those
requirements
is
also
the
only
layer
in
the
architecture
that
can't
be
changed.
F
So
that's
the
the
nub
of
the
problem
that
they
coincide
in
the
single
layer,
both
the
demands
and
the
constraints.
So
the
second
question
is
well:
how
can
we
overcome
this
barrier
and
there?
This
is
where
the
extensible
internet
proposal
comes
in.
It's
really
very
simple:
you
use
the
current
ip
protocol
unchanged,
you
don't
do
anything,
but
you
do
introduce
a
new
layer.
Above
it
we
call
it
service,
layer
or
l.
3.5
and
the
service
layer
offers
new
in-network
services
to
host.
So
this
is
the
relevance
to
this
working
group
is
this
network
services?
F
This
is
flow
termination
and
caching,
and
and
beyond,
which
I
will
discuss
later,
and
the
point
is
this
is
really
the
architecturally
coherent
version
of
what's
going
on
in
these
private
networks.
This
is
really
trying
to
articulate
how
to
think
about
it
in
a
coherent
way.
F
So
why
does
this
solve
the
problem
because
it
decouples
layer
three's,
two
roles?
Currently
it's
the
interface
to
both
l2,
which
gives
it
the
constraints
and
l4
which
gives
us
the
demands
and
in
the
ei
l3,
is
still
the
interface
to
l2.
It
does
a
perfectly
good
job
of
that,
but
l
3.5
is
the
interface
to
the
hosts
and
there's
no
reason
for
l
3.5
to
be
in
every
router.
F
F: All the service-layer communication is tunneled over IP, and the source specifies which service to invoke, using the tunneling protocol, when the packet gets to the service node. It's this ability for clients to signal to the service node what service they want that allows this to go beyond what the hyperscalers are doing, which needs to be backwards compatible; we're able to offer services like multicast, where the host needs to know that you've changed the semantics of what they've asked for. So this is the basic outline of the design.
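To make the tunneling idea concrete, here is a minimal sketch, in Python, of how a source might wrap a payload in a hypothetical EI tunnel header that names the service to invoke at the service node. The field layout (a 2-byte service ID plus a 2-byte length) and the service IDs are invented for this sketch; the talk does not specify the actual encapsulation.

```python
import struct

# Hypothetical tunnel header: 2-byte service ID, 2-byte payload length.
# Purely illustrative; the real EI encapsulation is not specified here.
HEADER_FMT = "!HH"

SERVICE_MULTICAST = 1   # example service IDs (made up for this sketch)
SERVICE_CACHING = 2

def encapsulate(service_id: int, payload: bytes) -> bytes:
    """Source side: name the service to invoke at the next service node."""
    return struct.pack(HEADER_FMT, service_id, len(payload)) + payload

def decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Service-node side: recover the requested service and the payload."""
    service_id, length = struct.unpack_from(HEADER_FMT, packet)
    header_len = struct.calcsize(HEADER_FMT)
    return service_id, packet[header_len:header_len + length]

if __name__ == "__main__":
    pkt = encapsulate(SERVICE_MULTICAST, b"hello group")
    sid, data = decapsulate(pkt)
    assert (sid, data) == (SERVICE_MULTICAST, b"hello group")
```

The key design point this mirrors is that the client, not the network operator, selects the service, which is what lets EI offer semantics-changing services such as multicast.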
F: Now, the key point here is that all these in-network services that I talk about are in software, so standards are not detailed written specifications: they're open-source code. There are three necessary software components on the service node. One is the service modules: for every service, whether it's multicast or DDoS protection, there is a service module that's implemented and running on a service node. It runs in a standardized execution environment, and this execution environment has a very simple set of primitives: packet in, packet out, ephemeral storage, stable storage, maybe one or two others. But it's a write-once, run-anywhere environment: if you write your service module to run inside this environment, it can run on any service node. And then, of course, on the service node you need some kind of runtime or orchestration, to scale up, scale down, and recover from failures.
F: This doesn't need to be standardized; there will be open-source versions available, whether it's Kubernetes or OpenStack or whatever comes next. And when I talk about in-network services, this is limited computation: this is not where you run your machine-learning jobs. This is basic packet forwarding, payload processing, simple functions like caching. So it's more complicated than simple IP forwarding, but it's not general computation; it's fairly limited. And these service nodes don't just have to be generic processors.
F: They can have secure enclaves, they can have hardware accelerators; but everything you do has to run on a commodity processor, and if there happens to be an accelerator, it can be used to get better performance. Now, choosing the services: this is where you have some kind of governance process, whether it's the IETF or something else. It's way too early to say, but there's some body that decides what the set of public services and their implementations are, and all of these public services are run on all service nodes.
F
That
is,
if
you're
offering
internet
service
so
you're
supplying
a
service
node.
You
have
to
download
all
of
the
service
modules
that
are
in
the
service
model
that
have
been
approved,
and
so
this
is
really
the
biggest
change
to
ei,
meaning
that
ei
brings
this
deployment
model.
That
is
because
they're
approved
software
modules
they
can
be
rolled
out.
They
can
be
deployed.
There's
no
per
vendor
per
domain
decision
process
of
is
cisco
going
to
support.
This
is
juniper
going
to
support
this,
that
is
18t
or
deutsche
telekom
going
to
deploy
it.
F
F
You
can
also
incorporate
other
frameworks
that
are
becoming
quite
popular.
They
can
be
running
on
these
service
nodes,
istio
and
then
oppa
for
the
policies,
various
telemetry
kinds
of
frameworks
and
then
support
for
radical
new
architecture,
something
like
icn,
whether
it's
the
donor
style
or
the
ndn
style.
This
now
just
becomes
a
service.
You
just
roll
it
out,
you're,
not
replacing
all
the
routers.
It's
just
a
piece
of
software
there'll
be
host
support,
obviously,
of
course,
for
all
the
services-
and
this
just
gets
rolled
out
like
another
standard.
F
So
it
really
lowers
the
bar
to
this
deployment
and
that's
why
we
call
it
the
extensible
internet,
because
once
you
get
this
framework
set
up,
it
makes
it
very
easy
to
extend
the
service
model
in
rather
radical
ways.
This
is
the
last
question.
F: Given the failures, why do we think this might succeed? There are three reasons. One: backwards compatibility. IP just continues to be used; if you don't want to change anything, you just use IP. Maybe 20 years from now that will go away, but when we roll EI out, nothing changes about IP. And as for the kind of resources you need, you just need the service nodes, which could use edge computing or existing PoPs; they have these facilities out there. I mean, we've talked to some carriers, and they say:
F
Oh
you
know
we
could
run
this
tomorrow
that
we've
got
all
these
facilities
available.
Obviously
there's
more
involved,
but
this
isn't
talking
about.
You
know
major
capital
outlays.
The
second
reason
is
fear
is
a
great
motivator.
You
know
the
internet
architecture
has
resisted
change
and
mark
handles
this
great
essay
on.
You
know
why
the
internet
just
barely
works
and
his
reasoning
is,
you
know,
sort
of
you
know.
Basically,
why
should
it
change
unless
it
has
to?
But
what's
implicit
in
his
essay?
Is
that
there's
no
alternative
that
the
internet
is
just
there?
F: So there's now an alternative, and the internet has a simple choice: it either changes, or it shrivels to being a last-mile provider, and we think the Extensible Internet is one possible change that will preserve its role. The third reason is that EI is based on a simple conjecture: that in-network support for applications, which is important for current and emerging apps, can be done at service nodes rather than at each router, and can be done in software,
F
Not
in
hardware,
and
the
point
is
the
private
networks
have
proven
every
aspect
of
this
conjecture:
they're
up
in
the
running
they're
running
at
scale,
they're
running
with
real
traffic
at
an
unimaginable
scale,
so
ei
is
really
the
architecturally
co-version
of
what
of
the
approach.
These
private
networks
have
already
proven
to
work.
So
that's
why
we
think
it
might
succeed.
F
So
my
last
real
slide
is
you
know:
where
are
we?
We
built
a
prototype,
really
murphy
mccauley?
Who
has
built
the
prototype
while
teaching
four
classes
a
week,
so
progress
has
been
a
little
slower,
but
you
know
we
want
to
finish
development,
which
should
be
in
a
couple
of
months,
deploy
it
on
fabric
and
other
test
beds,
engage
the
community
by
providing
this
test.
F
But
we
say:
if
you
have
a
new
service,
we
can
deploy
it
if
you
want
to
write
applications
on
top
of
these
services
like
pub
sub
or
whatever
you
can
do
that
too.
Continuing
our
discussions
with
industry
and
we'd
love
to
have
your
participation
you
here,
meaning
as
individuals
as
a
research
group
or
the
broader
community.
With
that,
thank
you
and
I'll.
Take
your
questions
and
eve.
Why
don't
you
kick
it
off.
C: Thank you, Eve, and thank you, Scott, for this. As you said, this is the second time I've sort of heard this, and finally it's sinking in a little bit. My question, which draws on some of the stuff in the chat, is about the layering here, and in particular the relationship with transport protocols. So, in your very simple example of source, SN, SN, dest: would you see the transport protocol running end to end, so source to destination?
F: There will definitely be an end-to-end reliability layer, because for reliability we want the failure of a service node to be no more serious than the failure of a router today; so that will definitely be end to end. We view IP as providing a pipe, and whether the pipe that goes between two service nodes implements some kind of reliability is an open question, but it's not mandated by the architecture. The essential transport is end to end; that's where congestion control is done.
C: Right, because I think this sort of factors into things like transport-layer encryption as well, and whether the SNs are able to access the data to do anything, unless those transport sessions are terminated and restarted. This is kind of making me wonder, and I don't dispute at all your points about needing to introduce some additional layering to get the development, I'm just wondering whether this should really go in at layer 4.5 rather than at 3.5. But I'll let others talk.
F: Well, actually, can I, I mean, this is a very active area of investigation for us. Our current thinking is that, first of all, all the pipes, the service-node-to-service-node pipes, will be encrypted at the low level, but that's trivial. At the end-to-end transport layer there will be encryption, but we want to have an option for the endpoints to say: here are the parts that we are willing to
F
Let
the
intermediate
node
see
versus
here
are
the
parts
that
they
can't,
and
so
they
can
decide
like
they
want
to
take
care
of.
You
know,
take
advantage
of
caching,
then
they
might
expose
certain
aspects,
but
if
they
want
to
keep
certain
material
private,
they
don't,
and
so
we
want
to
give
that
leave
that
flexibility
to
the
application
itself.
C: Yeah, thank you, and I kind of take that as: this needs to be thought about more, and the details nailed down.
A: Okay, Dirk Trossen, you're next in the queue.
I: Yes, thanks. Thanks, Scott, for the presentation. Similar to Adrian, some things that I heard again just slowly sank in, and there was one issue that I stumbled across, which is that the SN must implement the public services, which we need to agree on somewhere, possibly in the IETF, by governance.
I: Isn't that a barrier to entry, though? I'm not really quite understanding the reason for doing so, because the chat also asks the right question: who has the incentive to really deploy this? If I have to ramp up my SN deployment by essentially running any public service over it, isn't that preventing me from just quickly rolling out my own SN, where all I want to do is run my own service on it, because that's a low barrier to entry?
F
That
our
point
is
that,
once
you
support
the
execution
environment,
implementing
these
other
services
is
simple.
It
it
does,
require
extra
resources,
but
if
you
don't
make
it
uniform,
then
the
internet,
you
know
the
beauty
of
the
growth
of
the
internet.
Is
you
knew
wherever
you
plugged
in
you
had
a
set
of
services?
F
You
could
depend
on
if
we
now
go
to
the
space
of
ip
options,
which
you
know
is
no
fun,
that's
not
going
to
work,
and
so
that
really
is
critical
to
this,
that
you
can
build
an
application
that
relies
on
a
public
service
and
that
will
be
available
wherever
the
user
is
so
that
that's
critical
part
of
the
proposal,
and
given
that
it's
not
about
changing
equipment,
it's
just
about
the
scaling
of
your
service
nodes.
We
think
there's
a
chance
that
that's
gonna.
I
Take
off
so
so
so
maybe
I
misunderstood
your
your
slide,
then
so
you're
saying
because
of
the
standardized
execution
environment,
you
can
run
any
of
the
public
services
on
any
of
these
ends
that
have
been
deployed,
but
I
don't
so
if
somebody
comes
and
wants
to
run
a
public
service
on
my
execution,
environment
or
my
sn,
it
will
just
simply
run
because
it's
a
standardized
execution
environment
doesn't
mean
I
have
to
provide
it
as
a
catalog
of
possible
services.
F
No,
no,
it
is
the
let's
say
it's
the
itf.
The
iatf
decides
what
the
standard
services
are.
Every
service
node
that
claims
it's
supporting.
The
internet
has
to
run
all
the
services.
F: That's the service model, the composition model. [inaudible question] No, no, no, what I mean is: we specifically do not allow users to arbitrarily compose services; there will be sets of approved compositions that work, because we think it is impossible to provide a general rule for what can compose with what. So, yeah, that is one of the things: you can't just say "well, I want services 28 and 37", because, you know, what if one is some security measure that's point to point and the other one is a multicast? We actually proactively say that that kind of linking is done at the definition of the services, not by the user linking them together.
F
So
there
might
be
like
a
particular
stack
that
says:
okay,
you're
going
to
get
an
ip-like
service
and,
on
top
of
it,
you're
going
to
get
this
kind
of
transport
service.
And
on
top
of
that,
there's
going
to
be
this
kind
of
ddos
prevention,
and
you
know
whatever,
as
a
a
set
of
what
we
might
think
of
individual
features.
E: So then, one quick follow-on, which is: okay, so from the point of view of the user, they see a composed service as what they talk to the service node about. What about the internal communication among services? Is this some sort of private interconnect? Is it part of the architecture?
F
Yeah,
so
in
our
current
implementation,
what
we
would
do
is
we
would
say.
Well,
if
this
is
the
composition,
then
we
would
have
that
ball
of
code.
I
mean
take
those
several
different
pieces
of
code,
merge
them
together
and
then
have
them
run
as
a
single
service
module,
so
that
we've
actually
made
sure
that
these
code,
that
the
code
bases
work
together
and
and
so
we're
not
doing
enough
chaining,
it
is
actually
we
decide
to
put
it.
J: It looks like an interesting topic to me, and it also reminded me of two things: the first one is service function chaining, SFC, and the next one is SDN. I do see that these two kinds of service have the same or a similar model to this, I'll say, EI network.
F
So
I
I
think
for
both
of
them
it
is
the
extensibility
so
for
service
chaining,
you
tell
it
to
go
through
various
boxes,
but
it's
not.
The
network
is
not
making
you
a
promise
of.
I
will
deliver
multicast
packets
for
you.
It
is
go
through
this
box.
This
box
can
do
something,
but
but
service
chaining
doesn't
give
you
any
global
service
definition
that
that's
your
job.
F
F: You have to sort of, you know, if you say go through a firewall and then go to this and go through that, you're the one figuring out, eventually, what that's giving you. So I think that's the difference. Something like DDoS prevention, we're architecting it to be provided as a service-wide thing; it's not just a single box, it's not just a scrubbing box. So that's what I think the key difference is; I don't know whether that helps.
A: And there's a pointer to the paper; I think, Scott could confirm, it's an April CCR paper, yeah, and we have a pointer in the agenda and slides. And I see there's one other person who's joined the queue.
D: Oh yes, hey Scott, it's a good presentation. One thing to ask about your EI here: it's running on IP, and IP normally has some protocols to communicate among all the entities; so for your EI, as L3.5, do you have such protocols among the service nodes?
F: So, yes, we will have some kind of discovery process for service nodes to discover each other; but if you want to have SDN-like control for a domain, to have the service nodes know about each other, that would be fine. The control plane: I was completely silent on the control plane for this; we're open to a wide variety of ways you're going to manage your domain.
A: That's everybody in the queue, but I wonder, I know there's a very thriving conversation going on back and forth in the chat; is there anybody from there who would like to introduce their question? I think.
B: I think we're getting so late, Eve; we need to send this to the chat and offline to the list, because we're already over.
A: Sounds fine. Scott, thank you so much for coming in and initiating the discussion about this; no doubt there will be further conversation, so thank you very much.
F: Okay, thank you all, and please feel free to contact me; whatever is going on in the chat, let me know if there's anything I can help with. Thanks very much.
A: And you have access to the chat, so you can peruse what's been stated there, and the chat actually gets posted after the fact. Okay, over to you, you're our next presenter.
K: Here we view our work in the data center, so we have the flexibility to customize everything, and it's different from the internet. I'm happy to introduce our work on in-network aggregation for multi-tenant learning. This is joint work with colleagues from Tsinghua University and the University of Wisconsin-Madison.
K: Machine-learning algorithms are used in various scenarios, such as natural language processing and computer vision. With the increasing size of the datasets and the models, the algorithms are implemented as distributed systems, and the PS (parameter server) architecture is a typical architecture that can support this kind of distributed system.
K: Recently, the trend of in-network computation has provided an opportunity to solve this problem. Programmable switches offer in-network packet processing: the switch pipeline has registers which can store network state, and user-specified programs can be loaded to customize the packet processing.
K
There
exists,
there
is
an
existing
work
that
applies
in
network
of
aggregation
to
to
distributed
chain
html.
This
work
target
a
single
chart,
a
single
rack
setting.
So
there
is
a
wrap
connect,
multiple
workers,
so
the
switch
is
offload.
The
ps
is
offloaded
to
the
switch,
so
the
workers
gradients
are
aggregated
in
the
switch
to
support
micro
jobs.
The
switch
resource
is
statically
partitioned
and
assigned
to
each
job.
K
So
we
think
this
network
aggregation
can
be
further
improved
because
in
in
much
in
in
motor
training,
the
algorithm
takes
takes
efforts
to
compute
and
communicate
each
iteration
so
for
static
allocation
when
the
job
is
in
is
in
competition
time.
K
the switch memory would be idle. This design also assumes the topology is a star topology, with a switch in the middle connecting multiple workers, so it should be extended to a multi-rack setting; for example, in BERT training there are tens or hundreds of nodes, which cannot be put into a single rack. In addition, static partitioning adds complexity to integrating the switch memory allocation with the control plane.
K
So
we
propose
our
solution
with
three
key
goals.
The
first
goal
is,
we
should
maximumly,
you
use
the
network
condition
for
performance
gain
on
targeting
the
production
network.
We
should.
The
solution
should
support
multiple
simultaneous
job
change
efficiently,
and
it
should
also
support
a
multi-rack
quality.
K: To support multiple tenants, we do not statically partition the switch memory; instead, we organize it as a whole array, as a shared resource pool.
K: Then, for aggregation, each packet is hashed to an aggregator. The assignment uses a hash function over the job ID and the sequence number, so packets from one job with the same sequence number, but from different workers, are hashed to the same position and get aggregated there. The aggregated result is sent to the PS; then the PS returns the result to the switch.
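A toy Python sketch of this decentralized assignment: packets carrying the same (job ID, sequence number) land in the same aggregator slot regardless of which worker sent them, with no central allocator consulted per packet. The pool size and hashing details are made up for illustration, not ATP's actual parameters.

```python
import hashlib

NUM_AGGREGATORS = 1024  # illustrative shared-pool size

def aggregator_slot(job_id: int, seq: int) -> int:
    """Deterministic slot choice shared by all workers."""
    digest = hashlib.sha256(f"{job_id}:{seq}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_AGGREGATORS

# Packets for (job=2, seq=7) from any worker pick the same slot:
slots = {aggregator_slot(2, 7) for worker in range(8)}
assert len(slots) == 1
```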
K: First, because the aggregator assignment is decentralized, it's possible to have hash collisions; for example, job two's gradient packet is assigned to an aggregator which is occupied by another job, job three. In this case, all the packets pass through the switch and arrive at the PS; the PS does the aggregation and sends the result back through the switch.
K: There is another case of inconsistency, a membership inconsistency, which causes incomplete aggregation. For example, the first packet is sent to an aggregator which is still reserved by job three, so this packet passes through to the PS; but at this time job three completes its aggregation and deallocates the switch memory, the aggregator, and the remaining packets are sent to the aggregator and aggregated there. So both the switch and the PS have a partial result, and they are waiting for each other: this is a deadlock. And what's worse
is that there is no returning result packet to deallocate the switch memory, causing a memory-leak problem. So we designed a retransmission mechanism in the hosts, with deduplication in the switch. Packet A1 is stuck at the switch and the PS, but its following packets may be successfully aggregated and returned; so if the sender observes duplicated ACKs of the result packets, it retransmits the packet whose result is missing: all workers retransmit packet A.
K: The aggregator has a bitmap to track which workers have already participated in the aggregation, so A2 to An would not be aggregated twice: they are deduplicated, and A1 is added to the partial result. So we get a complete result, which is sent to the PS and replied back.
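A small Python sketch of the bitmap-based deduplication: each aggregator slot remembers which workers it has already folded in, so blind retransmissions from every worker are harmless. This is a host-side model of the switch logic, with invented names and integer gradients.

```python
class Aggregator:
    """Model of one switch aggregator slot with a participation bitmap."""

    def __init__(self, num_workers: int):
        self.num_workers = num_workers
        self.seen = 0          # bit i set => worker i already aggregated
        self.partial = 0       # running sum of gradient values

    def add(self, worker_id: int, value: int) -> int | None:
        bit = 1 << worker_id
        if not self.seen & bit:        # deduplicate retransmissions
            self.seen |= bit
            self.partial += value
        if self.seen == (1 << self.num_workers) - 1:
            return self.partial        # complete: ready to send upstream
        return None

agg = Aggregator(num_workers=3)
agg.add(0, 10)
agg.add(1, 20)
agg.add(1, 20)                 # retransmission: ignored by the bitmap
assert agg.add(2, 30) == 60    # last worker completes the aggregation
```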
K: With this design we have a correct protocol that can guarantee correctness. To support multi-rack aggregation, we need an aggregation hierarchy in the topology. Ideally, this aggregation hierarchy could have many levels, but we consider the data-center topology, where the core network has multiple paths and uses nondeterministic load balancing.
K
So
we
it's
not
easy
for
us
to
pre-compute
the
hierarchy
in
the
topology,
so
we
only
implement
the
objection
in
the
course
torque
switches,
because
there
are,
we
points
that
the
gradient
packets
must
pass
must
pass,
must
traverse.
K
Currently
we
have
a
two
level
aggregation
design,
so
all
the
packets
would
aggregate
at
the
walkers
tour
first,
then
they
are
sent
to
the
ps4
and
the
second
level
aggregation
is
happening
at
the
ps4.
The
final
result
is
send
it
to
the
ps,
so
in
this
in
this
multiple
racks
part,
we
also
overcome
another
challenge,
because
the
higher
level
switch.
This
is
the
hierarch
aggregation
hierarchy
or
the
previous
in
the
paris
page.
K
So
in
each
of
the
guitar
we
have
a
big
map
which,
where
each
b
that
he
knows,
I
always
children.
But
so
there
is
one
bit
of
four
in
the
switch
two
which
indicated
the
switch
to
zero.
But
the
one
meter
here
cannot
denote
the
all
the
possible
aggregations.
These
always
work
because
there
are
two
workers.
K
There
are
four
possible
agreements
these,
so
we
overcome
this
challenge
that,
by
falling
back
to
the
ps,
we
use
the
one
in
the
higher
level
switch
to
denote
the
successful
aggregation
of
the
whole
sub
tree
and
in
all
other
cases
we
regard
it
as
a
feeder
and
send
it
to
the
ps
for
back
to
the
ps
processing.
K
So
the
previous
design
also
overcome
other
challenge.
Other
challenges
on
our
reliability
when
the
the
re-transmission,
with
the
duplication,
can
also
handle
the
packet
loss
correctly
and
one
pack
loss
happens
in
the
host
of
the
re-transmitter
and
the
bitmap
in
the
switch
can
guarantee.
There
is
exactly
wax
application
and
we
also
redesigned
the
congestion
control.
The
essential
problem
is
what
should
be
the
congestion
signal,
because
some
many
packets
are
consumed
in
the
switch,
so
they
do
not
have
round
sweep
time.
So
there
is
no
rtt.
K
We
use
the
ecl
as
the
congestion
signal
use
eimd
for
congestion
control
and
on
the
in
the
host.
The
switch
can
only
compute
on
integrals.
It
does
not
support
floating
point
arithmetic,
so
we
do
compensation.
We
scale
with
scale.
The
float
point
block
points,
two
integers
bioscanning
factor
and
this
because
we
skills
the
floating
numbers
it's
possible
to
have
overflow
at
the
switch
to
handle
this
problem.
We
just
use
the
fallback
mechanism,
we
reuse
the
fallback
magnesium
when
switch,
detect,
detects
our
flow
overflow.
K
So
we
implemented
the
user
space
networking
stack
on
hosts
and
we
implemented
a
network
of
rehab
services
in
a
switch.
K
So
in
the
evaluation
we
have
nice
servers,
so
little
automatic
workers
and
one
and
the
ps
we
compare
atp
with
other
piece
like
pictures
such
as
ps
architecture,
with
different
networking
stack
and
the
ring
or
reduced
architecture
with
different
steps.
K: In some settings the performance gain is very significant: ATP benefits network-intensive workloads more than computation-intensive workloads. Comparing ATP with ring all-reduce with hardware acceleration, ATP is slightly better than ring all-reduce; but, more importantly, it only uses half of the bandwidth that all-reduce uses.
K
Then
we
show
the
performance
of
multiple
jobs.
We
compare
atp
with
the
static
memory
allocation
in
a
static
approach.
We
evenly
partition
the
solution.
K: Okay, there are more evaluations in the paper. So, in ATP we co-designed the host and the switch logic. The switch's aggregation service is best-effort, with dynamic resource allocation; the host networking stack has fallback mechanisms for the correctness guarantee, plus reliability and congestion control for ML jobs. Such a design can provide both performance gain and correctness, and we achieve our goal of multi-tenant, multi-rack support.
K: So the takeaway: usually, when we do in-network computation, we can get a very significant performance gain, since the switch computes much faster than a server; but the correctness guarantee is very difficult, because in-network computation introduces new semantics into the network. For example, packets can be consumed instead of lost, and hosts need to distinguish these cases; so usually we do switch-and-host co-design.
B: Thank you very much for this. It was great; the first discussions were great, so we're losing a little bit of time. Maybe we can send all the questions on your paper either to the chat or to the mailing list. Thank you very much for this, and Dirk, you're next.
K: Yeah, I need to stop sharing.
L: You see my slides? Okay, great. Okay, yeah, thanks for inviting me; it's great being with you again, if only virtually. Let me tell you about some recent work on a system that we call information-centric dataflow, which is a product of our Piccolo research project that investigates new ways of integrating computing and networking.
L: In COIN, so far, in my view, we have mainly been discussing two strands of work. One is coming from data-plane programmability, and then seeing how this could be put to a useful purpose, for, you know, having distributed applications and improving their performance and so on, maybe also evolving some protocols to support certain use cases better.
L
The
other
strand
you
could
say,
is
coming,
but
from
the
distributed
computing
where
we
say
okay,
what
can
we
learn
from?
This
will
be
computing
and
how
does
it
affect
our
view
on
networking
and
maybe
then
re-imagine
the
relationship
and
in
the
end,
have
more
programmable
systems,
also
like
the
ones
that
scott
talked
about,
and
so
I
think
the
the
like
the
thesis
of
this
group
in
in
general
is
maybe
is
there
some
confluence
at
the
end
of
these
two
strains?
L
So
this
work
here
is
more
on
the
on
the
lower
strand
here,
and
so
this
is
a
paper
that
we
presented
at
the
acm
icn
conference
this
year
and
there's
also
an
associated
demo,
and
so
I
hope
to
get
you
interested
in
this
and
then
you
can.
L
I
invite
you
to
check
out
the
paper
and
the
demo
video
after
this,
so
in
distributed
computing,
we
know
that
there
are
many
different
types
of
interactions,
so,
like
simple
message,
passing
remote
method,
indication
data
sets
implementation,
key
radio
stores
and
so
on,
and
so
I
think
some
itfs
ago
we
presented
a
system
that
we
called
compute.
First
networking
that
was
essentially
a
yeah,
too
incomplete,
really
general
distributed
computing
system
based
on
an
icn.
L: Dataflow is a model where you have a system of nodes in the network, and data objects that are sourced at some endpoint or some node trigger computation at other nodes, which then, you know, produces new data items that trigger computation somewhere else; and this DAG here can be implemented in different ways.
L
So
you
could
say,
because
this
model
is
so
simple,
and
so,
if
your
application
semantics
allow
it,
you
could
also
say
you
want
to
parallelize
the
execution
by
opening
up
a
second
subgraph
here
and
then
just
run
everything
faster.
If
you
have
the
resources,
for
example.
L: So what you have is something like a dataflow specification that could be, you know, laid out in the network in different ways, with different levels of parallelism and so on. The semantics in this word-count example would be that you split the input here, in this text-to-lines box; but you can also have other use cases where you, you know, reuse the same input for different types of computation, of course. So, some concepts: dataflow, this fundamental paradigm, can be used to implement batch as well as stream processing. In stream processing you, conceptually, look at each data object independently, in an unbounded stream of data; in batch processing you group data, and typically the systems allow you to, you know, implement groupings dynamically, based on some predicate or some time-window specification and so on.
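Before moving on to windowing, here is a tiny Python sketch of the word-count dataflow graph from the slide (a text-to-lines stage feeding a counting stage), with the counting stage parallelized over two "subgraphs" as described above; all names are invented for illustration.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def text_to_lines(text: str) -> list[str]:
    """First stage of the example graph: split input into lines."""
    return text.splitlines()

def count_words(lines: list[str]) -> Counter:
    """Downstream stage: count words in its share of the lines."""
    return Counter(word for line in lines for word in line.split())

text = "a b a\nb c\nc c a\na b"
lines = text_to_lines(text)

# 'Open a second subgraph': run two count stages in parallel over
# halves of the input, then merge, mimicking the scale-out on the slide.
half = len(lines) // 2
with ThreadPoolExecutor(max_workers=2) as pool:
    parts = pool.map(count_words, [lines[:half], lines[half:]])

total = Counter()
for part in parts:
    total += part
assert total["a"] == 4 and total["c"] == 3
```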
L
So
windowing
is
a
common
concept
here
that
allows
this
grouping
and
slicing,
and
this
can
also
lead
to
situations
where
you
have
something
like
a
predicate
that
allows
you
to
put
like
one
data
object
into
like
multiple
windows.
For
like
consumption
by
by
different
functions,
for
example,
but
you
can
also
split
this
up
in
in
different
ways.
L: You can't really predict the processing and transport delays and so on, and typically what you try to achieve is that you match the production rate with the input rate. So the task of a dataflow system is to adjust the processing graph to the application requirements and the data production rate; there's always some kind of variable performance, and systems can be compared by how well they keep up with the offered load.
L: There are a couple of really widely used implementations. Apache Beam is basically the unified programming model that many dataflow implementations use; sometimes they're called runners, so you may have heard about Apache Flink and Spark; Google Cloud Dataflow, of course, is a product, and so on. The picture here on the right-hand side depicts the architecture of such a system, I think probably inspired by Flink, where on the bottom you see the nodes,
what is called a task manager here; this could be something like a compute node in your network, which is offering several slots for computation. And the job manager, on the top right here, is kind of orchestrating this whole system: it has an overview of the available slots and is responsible for allocating tasks for certain jobs, and then also managing the connectivity between those jobs.
L: Fundamentally, this is not so easy, because, for example, Flink is really using a connection-based approach to connect these task managers; so it's not connecting the tasks, but really the nodes, if you like. These are like tunnels, and the Flink task manager is basically configuring the tunneling of all the task communication inside these connections; and of course this needs some, yeah, control, like credit-based schemes, for example, to reduce buffer load or queue sizes, and so on.
L
So
fundamentally,
this
is
not
a
trivial
task,
so
you've
been
looking
at
this.
We
find
that
well,
these
overlays
do
not
match
the
inherent
logic
of
like
processing
immutable
data
objects.
Very
well.
L
So
as
I
presented,
the
data
is
really
locked
into
connections
and
these
are
like
virtual
channels
between
hosts
and
you
always
need
this
orchestrator
checking
the
resources
and
making
the
task
relationships
and
so
on,
and
so
you
treat
the
network
as
a
black
box
and
then
you
tunnel,
the
the
task
communication
inside
virtual
channels,
and
this
makes
it
yeah
difficult
to
have
a
really
agile.
L
You
know
matching
of
like
your
compute
performance
with
the
network
performance
and
having
also
really
responsive
systems.
In
the
end,
you
don't
you
have
you,
don't
have
this
full
visibility
of
both
the
computing
and
the
networking
resources
now
in
the
system.
I
wanted
to
talk
about
today,
and
so
we
call
this
ice
flow
information
senior
data
flow.
We
assume
that
we
have
a
network
of
nodes
and
in
in
icn
we
we
name
everything
so
the
assumption
here.
L
That
is
that
we
have
a
network
of
named
nodes
and
there
would
be
some
routing
infrastructure
that
allows
us
to
discover
them
and
forward
forward
interest,
packets
and
data
packets
in
the
system,
and
so
on
top
of
this
of
these
nodes,
we
would
instantiate
functions
also
with
a
certain
naming
convention.
L
They
would
also
be
announced
in
this
routing
system
and
so
we'll
be
able
to
construct
compute
graphs,
and
what
we
do
in
this
system
here
is
that
we
are
kind
of
not
establishing
connections
to
functions
or
to
to
nodes.
We
are
actually
just
asking
for
input
data,
and
so
when
there
is
new
input
data,
we
this
triggers
computation
at
the
like,
downstream
function
and
so
on.
L
In
this
system
we
are
able
to,
for
example,
you
know,
split
up
the
computation,
as
I
showed
before,
but
also
have
something
like
like,
like
a
multicast
system,
where
you
can
have
reuse
the
same
data
item
in
like
icn
idiomatic
way
quite
efficiently,
so
in
ice
flows,
we
just
talk
about
names,
for
the
infrastructure
and
for
for
the
actors
in
the
system
and
the
computation
of
the
actors.
L
They
return
name.
Data
object
with
the
usual
icn
properties,
so
they
are
immutable,
they
can
be
cached
and
they
can
be
authenticated
encrypted
and
so
on,
and
so
the
interesting
challenge
is
here
is
that
we
have
asynchronous
data
production.
So
we
have
to
know
when
data
is
available
so
like
push
semantics,
which
is
typically
not
idiomatic
in
icn,
and
then
you
have
to
think
about
flow
control.
So
how
do
you
couple
consumers
and
producers
and
then
in
the
icn
system?
Typically,
you
you
publish
data.
That
means
you
make
data
available.
L
So
one
challenge
here
is
that
you
also
have
to
know
when
the
data
has
been
consumed,
which
normally
you
you
don't
really
know,
because
requests
can
be
answered
by
caches
and
and
so
on.
So
there
needs
to
be
a
system
like
basically
implementing
a
bit
of
tighter
coupling
as
well.
L
So
the
system
yeah
has
a
certain
naming
convention
where
we
named
the
application,
the
data
flow
actors
and
then
the
produce
data
objects
that
the
functions
produce
and
with
that
we
are
able
to
to
set
up
the
system,
and
so
this
this
tasks
of
you
know
making
data
available
and
learning
about
new
data.
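As a toy illustration of such a naming convention, here is a sketch in Python; the exact name components below (/app/actor/window/object) are invented for this sketch, not IceFlow's actual scheme.

```python
# Hypothetical hierarchical names for an IceFlow-style deployment:
#   /<app>/<actor>/<window>/<object>
# The components are assumptions for this sketch.

def object_name(app: str, actor: str, window: int, obj: int) -> str:
    return f"/{app}/{actor}/win{window}/obj{obj}"

def window_prefix(app: str, actor: str, window: int) -> str:
    """Prefix a consumer subscribes to, to learn about new objects."""
    return f"/{app}/{actor}/win{window}/"

name = object_name("wordcount", "text-to-lines", 3, 1)
assert name.startswith(window_prefix("wordcount", "text-to-lines", 3))
```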
L
We
are
using
an
icn
technology
called
dataset
synchronization
for
that,
where
so,
logically,
the
producers
produce
data
under
a
known,
prefix
and
consumers
can
subscribe
to
that
prefix
and
then
they
would
learn
when
there
is
new
data
under
that
prefix,
and
then
they
can
decide.
Okay,
I'm
interested
in
text
to
lines
object
one,
and
then
I
can
fetch
that.
L
So
there
are
implementations
of
this
concept
like
psync
in
in
icn,
which
in
the
end
which
you
may
have
heard
about,
and
so
that
means
so
in
the
on
the
like
very
low
layer.
This.
That
means
consumers
have
to
kind
of
send
update
interests,
to
learn
about
new
names
perfectly
and
then
from
an
application
perspective.
L
It's
quite
a
convenient
interface.
So
it's
a
bit
like
reactive
programming,
so
you
just
get
notified
when
something
you
know
shows
up
that
you
are
interested
in.
L: We have a grouping concept where we group data objects into windows, and we're actually just publishing these windows, using ICN manifests for that; so there's a two-level indirection scheme here that makes the whole system more efficient and a bit more scalable. I'm almost done. In addition to the dataflow communication, we also need to share some,
L
You
know
runtime
information
and
configuration
information.
So
what
is
the
static
flow
graph?
What
is
the
actual
dynamic
flow
graph?
What
are
available,
compute
slots
in
the
system
and
so
on,
but
also
implementing
this
loose
coupling
between
consumers
and
producers?
So
we
have
conceived
something
that
like
what
we
call
consumer
reports
where,
basically,
we
publish
what
windows
have
been
processed
by
each
consumer,
and
so
this
is
also
a
like
a
organized
data
structure
in
in
icn
way,
and
we
also
use
this
data
set
synchronization
scheme
to
share
this
information.
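A minimal sketch of the consumer-report idea: each consumer publishes the windows it has processed, and a producer only retires a window once every subscribed consumer has reported it. The data structure and the retirement rule are assumptions for illustration; the talk does not give IceFlow's actual report format.

```python
# Sketch: producers retire a window only when all consumers report it.
# (Invented structure; consumer names are hypothetical.)

consumer_reports: dict[str, set[int]] = {
    "counter-1": {0, 1, 2},
    "counter-2": {0, 1},
}

def window_consumed(window: int, consumers: list[str]) -> bool:
    """True once every consumer has reported processing this window."""
    return all(window in consumer_reports.get(c, set()) for c in consumers)

assert window_consumed(1, ["counter-1", "counter-2"])
assert not window_consumed(2, ["counter-1", "counter-2"])  # counter-2 lags
```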
L: Okay, just very quickly: this approach allows us to deal with congestion control and, yeah, proper receive-window configuration, in a different way. We can really adapt the interest rate, for example, to our actual processing speed, so avoiding asking for more data than we can process in real time. And observing the performance of my downstream consumers could be a trigger for an upstream producer to initiate scaling out, by creating a new subgraph, if I'm constantly realizing that my downstream cannot keep up.
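A sketch of that receiver-driven pacing idea: a consumer only keeps as many outstanding interests as it can process, and a persistently lagging consumer becomes a scale-out signal for the producer. The window size and lag threshold are invented for illustration.

```python
import collections

class PacedConsumer:
    """Receiver-driven flow control: issue interests no faster than we
    can process the returned data (numbers are illustrative)."""

    def __init__(self, max_outstanding: int = 4):
        self.max_outstanding = max_outstanding
        self.outstanding: collections.deque[int] = collections.deque()
        self.next_seq = 0

    def issue_interests(self) -> list[int]:
        new = []
        while len(self.outstanding) < self.max_outstanding:
            self.outstanding.append(self.next_seq)
            new.append(self.next_seq)
            self.next_seq += 1
        return new

    def on_data(self, seq: int) -> None:
        self.outstanding.remove(seq)   # frees budget for the next interest

    def lagging(self, produced_up_to: int, threshold: int = 8) -> bool:
        """A producer observing this could trigger a new subgraph."""
        return produced_up_to - self.next_seq > threshold

c = PacedConsumer()
assert c.issue_interests() == [0, 1, 2, 3]
c.on_data(0)
assert c.issue_interests() == [4]      # pacing: one slot freed, one issued
assert c.lagging(produced_up_to=20)    # far behind -> scale-out signal
```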
L: This dataset synchronization approach works reasonably well but, to be honest, we are currently looking a lot into performance optimizations for it; there's a lot to do in reducing the overhead and so on.
L
This
system
would
basically
need
additional
infrastructure,
so
like
a
name-based
routing
infrastructure,
for
example-
and
there
are
solutions
for
that
that
in
principally
principal
work,
but
maybe
in
terms
of
research,
this
system
could
also
be
supported,
perhaps
better
by
a
routing
system
that
gives
you
more
information
so,
for
example,
resource
education,
information
directly
and
not
only
reachability
information
and
so
for
coin.
We
think
this
could
be
an
example
for
like
new
protocol
work,
so
I'm
not
saying
that
this
is
the
the
best
way
of
doing
this.
L
So
this
is
has
to
be
something
that
you
know
ends
up
in
a
protocol
spec.
But
it's
it's
an
interesting
example,
so
how
you
can
break
up
overlays
and
leverage
systems
like
icn
to
do
that,
and
so
today
I
talked
about
data
flow,
but
you
can
imagine
that
other
interaction
classes
could
be
promising
as
well,
so
other
systems
may
be
like
kafka
like
published
broker
systems,
and
so
on
with
that,
thanks
for
your
attention
are
there
any
questions.
A: Thank you so much, Dirk; that was super interesting. I just want to point people to the, you know, ICN paper and demo. And I also wanted to go back one presentation and remind people that his presentation and paper are available from NSDI '21.
A: So I think, in the interest of time, we will go forward in the program. We had segmented the program into, you know, papers that have been published elsewhere, and that's been really fruitful; thank you all for your presentations. The next section is going to be brief, a sort of new-ideas section, and then we're going to get to the drafts and draft updates, and then a new draft.
B: Yeah, can you, okay, you seem to be better at this than me. I'm sorry, I'm sorry; I know there's a way of sharing only part of the slide, but I can't find the button.
B: Maybe if I close my, oh, no, I can't close all my screens. Yeah, you should try it. I'm sorry, people; I can start talking anyway, so that we don't spend too much time. The presentation is about MODA; it's currently a proposal
B
That's
been
put
together
by
a
large
group
of
people
and
it's
a
european
wide
project
and
the
idea
started
from
essentially
a
lot
of
us
actually
a
lot
of
people
on
the
moda
team
or
will
be
when
we
once
we
see
the
slides
are
very
familiar
to
this
group,
because
a
lot
of
them
are
involved
and
one
of
the
ideas,
oh,
my
slides,
are
coming.
Thank
you.
Thank
you
so
much
eve
and
the
the
id
and
we
can
go
directly
to
the
first
slide.
A
It
in
presentation,
yeah.
B
In
mode
yeah,
so
the
idea
started
from
this.
A
lot
of
us
who
work
in
iot
have
a
big
problem
is,
and
I
think
it
was
interesting
that
a
little
bit
of
that,
the
scot
presentation
touched
it
a
little
bit.
B: Everything is fragmented, and essentially in IoT what people do is put a sensor somewhere, put a gateway in, connect it to the cloud, and claim that the problem is solved. The problem is not solved, because the minute you want to start having applications and services and artificial intelligence that cover more than one thing, say you're in agriculture: well, you may want to take decisions that are based on the market, but then how does that work, when you need to determine
B
You
need
for
that
to
know
or
how
much
disease
do
you
have
in
your
farm
or
maybe
what's
the
building
temperature,
because
that
will
increase
or
decrease
depending
on
you
know
the
time
of
year
or
the
time
of
day,
and
it
will
increa
increase
or
decrease
your
production,
which
yeah
will
go
back
to
your
your
modeling
of
decision,
and
now
you
would
the
minute
you
do
that.
Well,
you
have
people
involved.
You
have
a
ton
of
different
systems
that
don't
talk
to
one
another.
Next.
B
So
oops
so
essentially
in
this
fragmented
environment,
the
application
development
has
very
very
important
pain
points.
So
all
these
fragmented
systems
that
require
overlays
multiple
gateways,
different
cloud
applications,
the
different
cloud
providers
that
don't
talk
to
one
another
and
there's
issues
always
of
that
in
security
and
privacy,
because
there's
data
privacy,
there's
digital
sovereignty,
there's
multiple
customers
who
are
involved
and
it
creates
a
big
problem.
B
At
the
same
time,
in
this
group
coin,
we
started
to
think
that
the
internet
paradigm
is
much
more
than
is
much
more
like
a
computer
board
than
a
telephone
network,
and
if
we
have
a
computer
board,
then
we
need
an
operating
system
and
the
extra
cloud
view
that
we
have
in
moda,
where
there's
computing
inside
different
nodes
that
actually
collaborate
with
one
another
at
different
levels
in
the
network
is
one
realization
of
this
vision
and
eve
you're,
the
one
who
said
that
the
data
is
the
fuel
of
the
21st
century
and
in
such
an
environment
also
for
iot
and
for
all
these
distributed
systems.
B
Data
valorization
is
key.
I
think
scott
mentioned
that
you
know
you.
You
use
open
source
approaches,
and
this
is
exactly
what
we
want
to
do,
because
in
fact
it
is
not
the
algorithms
or
the
software
that's
worth
and
the
words
a
lot.
It's
actually
the
data
itself,
and
so
we
want
to
make
sure
that
we
can
actually
maximize
data
valorization
by
acting
it
on
it
inside
the
network.
Next
slide.
B: It provides an infrastructure that allows applications to be easily developed, and it has a lot of things that were discussed in COIN; we have almost one draft per each of these topics: discovery, services, communications, and publish/subscribe.
B: The implementation of commonly used functionalities that we need, including, you know, forwarding, forwarding to different CPUs, and forwarding not just across layer three but functional forwarding, the implementation of forwarding based on names, so there's a link there to ICN and NDN. And also, obviously, in MODA, going back to the pain points, we would like to have APIs and tools for writing and running code across those multiple,
B
Oh
my
god.
Here's
a
typo
heterogeneous
nodes
inside
the
network
next
slide.
B
So
the
format
didn't
work
very
well
here,
the
main
mode
of
functionality.
I
don't
want
to
go
I'll,
go
fast,
because
I
don't
want
to
go
into
everything
this,
but
obviously
we
want
to
do
some
orchestration.
There
was
discussion
about
orchestration
before,
but
we
feel
that
we
can
actually
look
into
this
and
again
on
device
computing
over
a
trojan
system
we
want
to
have
if
we
can
as
much
as
possible
reusability
and
that's
actually
a
problem
on
those
verticalized
applications
is
if
you
change
vendor
or
if
you
change
supplier.
B
Your
system
doesn't
work
anymore
because
they
use
different
protocols,
different
semantics,
different
everything
you
would
like
to
have
modularity
as
a
design
choice.
I
usually
when
I
teach
my
class
on
distributed
systems.
I
talk
about
the
lego,
blue,
brick
approach,
so
that
you
know
we
want
to
have
a
lot
of
lego
bricks
and
we
connect
them
when
we
need
them
and
we
don't
use
them
when
we
can't
or
when
we
we
won't
and
actually
also
in
iot
systems.
It's
important
because
a
lot
of
times
you
have
limited
resources.
B
You
want
to
be
able
to
manage
the
the
network
processing
units
themselves,
so
the
in-network
computation
that
allows
the
packets
and
the
information.
I
would
think
I
would
speak
more
of
information
in
this
case
to
be
properly
managed
and
properly
sent
to
the
right
interfaces
and
the
right
end,
devices
and
end
system,
and
obviously
we
want
to
support
data
and
intelligence
services.
More
and
more
data,
driven
until
it
is,
is
actually
the
basic
of
a
lot
of
networking.
B
Last
year
with
andy
schuster,
we
did
a
study
for
the
nsf
on
the
future
of
broadband,
and
actually
the
conclusion
was
that
everybody
who
does
broadband
wants
data,
so
data
driven
made
required
to
have
new
new
features
inside
moda
and
inside
the
network
itself
and,
of
course,
there's
there's
ai.
That
is
not
only
in
the
applications
but
more
and
more
in
the
network
in
terms
of
network
management
and
even
management
of
loads
inside
data
centers
and
elsewhere.
B
Next
slide.
I
think
it's
almost
the
last
one
yeah.
I
think
the
previous
slide
is
an
overview
of
of
what
it
looks
like,
but
the
picture
is
not
very
good,
so,
let's
we
can
skip
it
and
the
link
to
the
coin
rg.
Well,
actually,
again,
it's
not
just
that.
There's
a
lot
of
people
involved
in
mode
that
are
also
members
of
this
community
is
actually
there's
a
lot
of
common
research
topics.
I
mentioned
discovery.
B
We
would
like
to
discover
storage
function
from
you,
know,
functional
functions
and
and
computation
so
functional
discovery,
storage,
discovery,
computation
discovery.
We
would
like
more
and
more.
I
think
both
and
that's
actually,
since
I
know
I
have
two
hats
here,
so
I
can
say
as
I
as
now
as
maybe
as
a
current
chair,
that
distributed
distractions
and
protocols
are
also
very
important
to
both
groups.
B
They
centralize
security
and
trust,
obviously
important,
and
there
is
going
to
be
a
draft
update
later
in
this
talk
about
security,
but
it's
also
an
important
thing
when
you
start
distributed
systems
federated
learning.
B
We
haven't
really
addressed
that
a
lot
in
coin,
but
it
is
something
that
we
could
look
at,
because
federated
learning
needs
some
form
of
in-network,
maybe
not
pure
computation,
but
at
least
some
form
of
trending
and
and
connections
and
orchestration,
and
obviously
there's
all
the
use
cases,
and
actually
the
people
right
after
me
will
talk
about
use
cases,
so
I
won't
go
there
so
this
was
this
was
a
very
short
presentation.
This
is
moda.
B
We
know
that
this
is
right
now
being
proposed,
we've
been
evaluated,
we
don't
know
if
it's
going
to
go
through,
but
I
think
the
team
and
I'd
like
to
really
acknowledge
the
architecture
team
that
did
this
and
a
few
of
them
are
on
the
call
right
now.
This
was
an
incredible
effort.
This
was
an
incredible
fun
thing
to
do
and
we're
thinking
that
maybe
we
want
to
continue
the
work
inside
this
community
and
outside.
Thank
you.
A: Thank you so much, Marie-José. There is a question on the chat asking if there are any pointers that you or others can share that would help people read more about it, or if there's a website, for example. So, in the interest of trying...
B: I will, yeah, I will see what we can share. I see this is from Daniel King; I will see what we can share, I'll ask the proposal management, which I was not part of. So thank you, Daniel, for this question, and I will ask. Thank you.
A: Okay, it looks like the use-cases presentation is next; we're into the draft updates, and I'm going to ask if folks could please shorten their presentations to eight minutes each, just to kind of reclaim a little bit of time.
A: And actually, you know what, I see that I did take the advice of those in the chat window to load things into Meetecho, which I hadn't done before, so I admit user error on my part. It says okay; so, do you know how to request the slides? Looks like you've got them.
M: Yeah, so hi everyone, I'm Ike, and I'm here on behalf of all the co-authors of the use cases draft, just giving you a bit of an update on what we've done since the last iteration.
M: Until now, it was a rather loose collection of use cases, stemming from the industrial ones right in the beginning, and yeah, now we have quite a few of them; what we're currently trying to do is actually go more towards providing input to the second charter item that we have. The changes in a nutshell, since the last iteration:
M
So, first of all, we have regrouped the use cases, from the historically grown structure to the one that we have right now. We thought of four ways we could think about the use cases. The first one is really new user experiences that can be enabled using COIN; the AR/VR work that Marie-Jose has presented quite a few times is, for example, in that category.
M
The third category is improving existing COIN capabilities. We already have networks which use some form of COIN (for example, CDNs already have some COIN functionality in them), and those use cases are really about how such networks can be improved even further. And then, finally, we have entirely new COIN capabilities that can be enabled.
M
As a second aspect, we've also tried to sharpen and tighten the taxonomy; it was not that focused before, and we've tried to fix that now. I'll come back to that in a moment. And finally, we've also already started to prepare the actual analysis that we want to do later, mainly focusing on the research questions and requirements.
M
So with that, let's have a look at the current draft structure. What you can see is that we have actually done the regrouping already. We also have one new use case provided by Xavier, which he already presented at our last meeting, and we already have the analysis in the structure right now, so we're only waiting to get started with that, basically. And then...
M
Everything in green that you see on this slide is terminology defined in this draft by Dirk and the other authors, and we've tried to use that terminology throughout our draft as well, or at least have started to do that. We've also added quite a bit of additional terminology that we thought might be necessary for our draft. Overall, our idea would be to arrive at an overall terminology across all of COIN.
M
The question then, obviously, would be where to actually place that terminology: whether we have it in every draft, or whether we have one draft that collects all of it.
M
We've also tried to link the descriptions of the different use cases to the new grouping that we have introduced into the draft. And finally, we've also tried to focus the requirements more narrowly on the COIN capabilities, so that we really describe, or try to describe, what the COIN capabilities themselves need to do, rather than what the overall use case has to satisfy. And with that, I'm already at the end.
M
So the next steps we are planning: first, finish aligning the use cases with the new taxonomy, really pinpointing the requirements and aligning the descriptions a bit better. Then we also want to think about the terminology that we're currently using, whether that covers everything we need or whether there are additional terms we might require, and then, maybe as a question to the whole research group, where we would like to collect that terminology.
M
As the terminology draft by Dirk and his co-authors has, I think, been expired for some time, it's really just a question of where we collect that. And then...
M
Finally, we would also like to start with the analysis. We'll first try to condense the opportunities, research questions, and requirements, and then try to identify aspects that are similar across all of the use cases, so that we can perhaps give collected input into the second charter item of this research group. And that's it, thank you. I think we can take comments to the list if there's not that much time.
B
Thank you. I cannot really comment much; I'm a co-author, so it's hard for me to be both co-author and chair on this, but I think there will be a discussion among the chairs about bringing this in as an RG document. I'll let the other chairs talk about it, since I'm an author.
I
Very good. I'm still a bit new with the UI, and Colin has given a good tutorial, thank you. "Transport protocol issues in in-network computing systems": that's another draft that my co-authors and I have been working on. It had initially expired by the last meeting, because of the focus we had mainly on research presentations, and it's not quite as outdated as the date on the first slide suggests; it was obviously presented on the 11th of November 2021, not 2020. The premise (I've highlighted the main piece; I don't intend to read all of this) is: what if part of the intended function may be provided as part of the communication system? That's really the network premise of COIN. This is a quote from the original end-to-end paper, and we're looking at the challenges to traditional end-to-end transport protocols, trying to outline opportunities and research questions for the design of transport protocols that may arise from the availability of in-network computing capabilities.
I
That's the premise, really, as a recap. The intention is to provide insights into various transport technology areas: research questions, ongoing efforts, and concepts that are currently under study. So we're not limiting ourselves only to existing RFCs; we're also looking at ongoing work and research, whatever we can find, to outline possible future work in COIN and elsewhere (it starts with COIN, but that doesn't necessarily mean all of the work needs to be done in COIN, obviously), and to provide that in a gap analysis.
I
So the goal is to contribute to the objectives of COIN, and I've quoted here the COIN charter scope item number four, namely the research on potential new transport protocols required or enabled by in-network compute resources. That's a dedicated, chartered item, really, and we believe and hope that the draft contributes to it. For the general structure, we made similar changes, aligned in some sort of lock-step with the changes we also made in the use case draft.
I
In both drafts we are trying to sort things a bit more into various areas of discussion. In this case, for the transport draft, we collected the separate chapters from before into a single "technology areas" section, which is now Section 3, so that part has been entirely restructured and everything was integrated under that section; and then we started with the gap analysis.
I
This is just to present the technology areas and the gap analysis more visibly in the structure, so space is added here for future extensions, in particular the possible formulation of future needed work. This gears the document much more towards the charter item that I mentioned before.
I
So, in Section 3: everything that was previously spread across the various sections, I think initially Sections 3 to 7, is now all found here, and that also helps us to link back from the gap analysis in Section 5 into the appropriate technology subsection later on. So it makes the editing easier.
I
Apart from the larger structural change, there were only smaller updates here: in Section 3.1, we linked this to the discussion on addressing that is ongoing at the moment, and in Section 3 we also added ongoing work in ICNRG and in the working group, so these were content changes in addition to the structural changes. A question we would like to ask the group, and we're happy to get ideas either in the chat or on the mailing list, is: what research questions, related concepts, and ongoing efforts are we missing?
I
Our intention is really to approach the wider community with that question specifically, because we're sure that we're missing concepts, simply because we're not aware of them or haven't worked through all the various ones. So if you have any research question or related concept you want to share with us, please do so. Gap analysis:
I
As I said, we added this with the same subsection structure, so you will see that the subsection structure of Section 3 is mirrored, in order to later have a clear relation between the technology areas, the seven subsections we have, and the actual gap analysis in these various areas.
I
At the moment we've only added an introductory paragraph outlining the intentions; we have no content in the individual sections yet. A question, also to the list, or to the community, is whether to have such a section for COIN or not. We believe it may help us in working towards the charter. If we don't want this, we can remove it, but that was the suggestion we made, and it's a question on which we would like to get some feeling from the community.
I
Future plans: we mentioned these already last time, and we haven't really made too many changes there yet, but we want to, because we also made changes to the use cases. Now that the use cases are structured, as you could see in Ike's presentation, more clearly around the notion of capabilities, we want to make a clearer linkage to the use cases, and in particular to the taxonomy that's now being used in the use case draft, in the next version of the transport draft.
I
We want to add more existing work. There was already communication between the chairs, but unfortunately we were a bit too late to bring this to them: work that has been presented at HotNets 2021. Bharaji, one of the co-authors of that paper, is actually, I hope, still online; at least he was online before. So we are trying to bring more existing work in here, even to the point that we may have the work we find presented to the wider community.
I
As I said, we were a bit late for the agenda this time, so that may come at the January meeting. Also, similar to the use case draft, we want to turn the research questions, maybe at a later stage, into some sort of requirements language that would allow us to formulate research work around those requirements.
I
For the gap analysis, to fill this in (assuming the question I asked before is answered positively, so that we would like to have a gap analysis because it's useful to have one), we would really need help, and we would invite contributors to that. And with a view on the charter, there was a bit of confusion I had here, because I thought the use case draft was an RG document already: I think it's the transport draft that isn't an RG document yet, which is why it's written on here, to maybe adopt it as a potential key output towards the scope of the charter, as an RG draft as well. At least that was the memory I had about this issue. And again, to repeat, we would really welcome contributors.
I
Gab
analysis
related
work,
new
work
that
exists,
even
if
you
just
send
us
links
if
you
can't
be
bothered
to
get
it
as
a
course
or
into
properly
written
up
we're
happy
to
get
any
material
that
you'll
find
that
you
think
is
relevant
should
be
listed
in
the
in
the
draft.
Please
do
contact
us
to
do
that
if
you
want
to
become
a
bit
more
active
in
helping
us
absolutely
welcome,
certainly
wouldn't
that
would
want
that.
A
Dirk, I think you're spot on in terms of what the future and next steps are. You have a question from Peng Liu, so Peng, if you'd like to jump in.
I
Yes, yeah. So the linkage to Dyncast is still in the draft, even though I think the Dyncast draft has actually expired, and there are aspects mentioned in the Dyncast work that I think would need to be pulled more properly into our draft. That's correct; it's a bit of a loose end in the draft at the moment, and it needs to be clarified. Yeah.
G
Yeah,
I
was
just
asking
I'm
seeing
answer
it
also
in
the
chat
but
question
about.
If
there
was
a
github
repo
that
we
could
contribute
to,
especially
in
the
gap
analysis.
I
Yes,
absolutely
we
we
ike
thanks
for
putting
the
the
get
up
into
into
the
chat.
I
I'd
seen
that
I
couldn't
type
quickly
enough.
Yes,
I
I
would
really
welcome
and-
and
that's
certainly
a
way
to
contribute
to
that,
we
do
the
same
for
the
use
case
draft
as
a
separate
one
for
you,
yeah.
G
We've,
that's
that's,
really
improved
us
to
get
get
input
from
contributors
in
other
venues,
so
thank
you.
A
Thanks
so
much
dirk
really
great
progress,
I
in
the
interest
of
time
we're
going
to
move
forward
in
the
program.
Let's
see,
we
have
the
enhancing
security
and
privacy
draft
and
if
again,
if
you
could
condense
your
talk,
I
know
we've
given
you
ten
minutes
originally,
but
if
you
could
cut
it
down
to
seven
or
eight,
that
would
be
wonderful.
Thank
you
very
much.
O
Will do, okay, great. So yeah, thank you. You haven't heard from me in a while, but we still have the draft on enhancing security and privacy with in-network computing ongoing, and actually a lot has been happening in the past months and years in this field of research.
A
Can you hear us? We're having terrible difficulty hearing you; you're completely broken up. We don't... oh, that sounded fine. I wonder if it's your headphones, usually.
A
...video, to reclaim some bandwidth, but could you...
A
Okay, I can hear you now, but now you're showing... I think we got about one or two, yeah. If you could go back, yeah. Well, okay, so you didn't hear anything; I think we were on your first slide, we were looking...
O
Okay, to make it quick: the focus is on implementing security and privacy mechanisms within the network, for better performance and faster reaction, and use cases are, for example, retrofitting security for...
A
Okay,
I
know
I
know
I
think
we're
going
to
we're
going
to
need
to
defer
your
presentation
every
time
you
we
we
speak.
It's
fine,
then,
as
you
present,
the
audio
is
really
okay.
A
Sadly,
I'm
sorry.
A
Okay, let me download, and let's see, where do I go to get the... okay, so I've uploaded everything to "manage the slides", and then, if I want to show this...
P
The button on the left-hand side, near where you mute and unmute, on the left.
O
You... you again missed everything.
O
Oh yes, okay, so yeah, very quickly: there is work which tries to implement encryption, as well as secure cryptographic functions for encryption and hashing, in the data plane. This lays the foundation for security and privacy applications, such as implementing security standards, for example IPsec or MACsec, in the data plane, and it might also allow for onion routing or message authentication in the future.
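As a rough, hypothetical illustration of the message-authentication idea just mentioned, here is host-side Python rather than an actual data-plane implementation; the shared key and packet format are assumptions made purely for the sketch.

```python
# Minimal sketch of per-packet message authentication of the kind
# that in-network cryptographic support could enable. Data-plane
# versions would use hardware-friendly primitives rather than a
# full HMAC-SHA256 in software.
import hmac
import hashlib

KEY = b"shared-secret-key"  # assumption: negotiated out of band

def tag_packet(payload: bytes) -> bytes:
    """Append a 32-byte authentication tag to the payload."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_packet(packet: bytes) -> bytes:
    """Strip and check the tag; raise if the packet was tampered with."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload

pkt = tag_packet(b"hello coin")
assert verify_packet(pkt) == b"hello coin"
```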
O
Other work is about allowing scalable, transparent... oh, obviously, yes, no problem, yeah. So our work is...
O
Yeah, we can also use computing in the network to improve intrusion detection. One potential here is to allow for inline detection and a quick reaction to anomalies, but we can also reduce the load on existing intrusion detection systems, and this is exactly what Lewis and colleagues do here: they apply rule-based pre-filtering in the data plane, and by that they achieve a traffic reduction of up to 75 percent at the actual IDS.
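The general shape of such rule-based pre-filtering can be sketched in plain Python; the rules, packet fields, and toy traffic below are made up for illustration, and a real deployment would express the same matches as P4 tables in the data plane rather than run Python.

```python
# Minimal sketch of rule-based pre-filtering in front of an IDS:
# only packets matching a rule are mirrored to the IDS, the rest
# are forwarded without deep inspection, reducing the IDS load.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    dport: int
    payload_len: int

RULES = [
    lambda p: p.dport in (23, 2323),        # telnet-style probes
    lambda p: p.payload_len > 1400,         # unusually large payloads
    lambda p: p.src.startswith("10.66."),   # watched subnet
]

def prefilter(packets):
    to_ids = [p for p in packets if any(rule(p) for rule in RULES)]
    reduction = 1 - len(to_ids) / len(packets) if packets else 0.0
    return to_ids, reduction

# Toy traffic: only one of four packets matches a rule.
traffic = [Packet("10.0.0.1", "10.0.0.2", 443, 512),
           Packet("10.66.1.9", "10.0.0.2", 23, 64),
           Packet("10.0.0.3", "10.0.0.4", 80, 200),
           Packet("10.0.0.5", "10.0.0.6", 8080, 90)]
suspicious, cut = prefilter(traffic)
print(len(suspicious), f"{cut:.0%} of traffic kept away from the IDS")
```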
O
Okay, next slide. And lastly, we looked into network monitoring, which can be used, for example, for network forensics. Here, Sonchack et al., for example, propose flow monitoring on P4-based hardware switches: they pre-process packets in the data plane and then very efficiently create flow records in the control plane, and this offers a high-performance, cost-efficient alternative for accurate flow monitoring in comparison to existing solutions.
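A minimal sketch of what flow-record generation computes, assuming a simple 5-tuple key and a dictionary-based packet format invented for this example; it only mirrors the idea of pre-processing packets and assembling per-flow records, not the switch pipeline itself.

```python
# Minimal sketch of flow-record generation: group packets by their
# 5-tuple and accumulate packet count, byte count, and timestamps.
# Real P4 pipelines would hold this state in register arrays and
# export records from the control plane.
from collections import defaultdict

def flow_key(pkt):
    # classic 5-tuple identifying a flow
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def build_flow_records(packets):
    records = defaultdict(lambda: {"pkts": 0, "bytes": 0,
                                   "first": None, "last": None})
    for pkt in packets:
        rec = records[flow_key(pkt)]
        rec["pkts"] += 1
        rec["bytes"] += pkt["len"]
        rec["first"] = pkt["ts"] if rec["first"] is None else rec["first"]
        rec["last"] = pkt["ts"]
    return dict(records)

pkts = [
    {"src": "a", "dst": "b", "sport": 1234, "dport": 80, "proto": 6,
     "len": 100, "ts": 0.0},
    {"src": "a", "dst": "b", "sport": 1234, "dport": 80, "proto": 6,
     "len": 200, "ts": 0.5},
]
for key, rec in build_flow_records(pkts).items():
    print(key, rec)  # one flow: 2 pkts, 300 bytes, 0.5 s duration
```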
O
Yeah, so to conclude: we see increasing interest from the research community, and that underlines our intuition about the possibilities that COIN provides for security and privacy, and also its relevance. This is confirmed by recent publications which match the ideas of our draft and are published at highly ranked venues, for example USENIX Security, and existing work also provides first proofs of concept which show that implementing security in the network is actually feasible.
A
Thank you so much, and really sorry about that; it's true, the network itself got in the way of your presentation. But thank you for that terrific update on some of the research that's out there. Any questions from the audience, in our...
O
I think there was a question in the chat before, but I can get to this offline, I think. Okay.
A
Okay, that sounds good, great. Then I will stop the slide share. Thank you so much for persevering, everybody. In our last minute, I think there are really two topics.
A
One is just that, if you don't know already, HotNets is going on in parallel this week, and there's some very interesting work there. The second point of interest is that, of course, we've been threatening to hold an interim in order to review our scope, and I think we would really like to do that halfway between this IETF and the next, so stay tuned.
A
For that, we welcome your inputs on the mailing list. And thank you to so many of you for sticking it out despite the network errors, and for arriving at whatever hour it is in your time zone; in many of those time zones it's an inconvenient hour. So thank you for being here, and to all the presenters for the terrific presentations.
A
I know there's been a really healthy dialogue in the chat window, so I look forward, we all look forward, to sharing some of that in the minutes. So, thank you once again, since we're at the top of the hour, for being here. Any other comments, Marie-Jose or Jeffrey?
B
Okay, I think... thank you very, very much, everyone, and yes, see you on the mailing list, and see you, hopefully, in person.