From YouTube: ICNRG Interim Meeting, 2020-12-01
A

A

B
No, Dirk has not cloned himself, so the second instance of Dirk is Dave. The other one looks much better.

A
Yeah, I had to do this for other IETF meetings I had to attend, and they didn't want me to join. So, ICNRG, so.
A

B
So why was the... why was there a Webex sent out with a different Webex meeting? That was the one that was in my calendar.

A
I suspect this got auto-generated, I'm not sure, or maybe we just had to.
B
All right, well, okay, Toro-san and Yoki-san have joined here, so everybody was in. The other meeting is now back here.
A
Okay, let's get started. Thanks for making this; we know that everybody has a ton of online meetings these days, so we appreciate you being here. Welcome to ICNRG. This is an interim meeting after IETF 109, and Dave and I are chairing it. While we do this intro, you can already think about whether you would be able to help with taking notes today; we're still looking for a note taker, and it's kind of important. Okay.
A
So this is the online participation section that not everybody has received, sorry for that. Let's do the Q&A using this protocol: add yourself to the queue if you have a question, using "+q".
A
And quickly, take note of the IRTF Note Well. This has to do with IPR: by participating here you agree to follow the IRTF processes, and this is mainly about notifying the community quickly.
A

A
Please take note of that, and yeah, adhere to the code of conduct. And then, again, as usual: we are here in the Internet Research Task Force, so we are generally interested in longer-term research related to the Internet, to evolving the Internet. This is clearly research, not standardization, and sometimes this is irritating, because we are using similar procedures and concepts, drafts and RFCs; just be aware that we're not doing standards here.
A
Okay, if you're new to this, this is our general infrastructure: mailing list, wiki page, and yeah. So, I hope everybody has had enough time to think about volunteering for taking notes.
C

A
Yeah, we appreciate it, thanks a lot. It's also a good time to maybe give somebody else a chance; you have done it quite a lot. Thank you.
A

C
Okay, let's do something: I'll do it, but I'll listen to... you're recording this, right?

C

B

C
I'll catch up with the recording over the weekend or something.

B
And add it into the CodiMD, just so we don't miss anything.
B

A
So, I went over this a bit quickly. We have this CodiMD link here that I also sent around, and you can find it on the datatracker; please subscribe. We're also using this for tracking attendance, like an online blue sheet, so please feel free to add your name to the list there. Thanks.
A

A
And so this is the agenda; let me make myself... So, we have a pretty exciting agenda today, many cool research presentations. Just asking: is there anything you'd like to change, any last-minute requests?
A
So, regarding the timing, you can see it's quite ambitious. You don't have to use all the time in your time slot, of course, but since this is research, we anticipate discussion, and we are not constrained by the usual room bookings or anything else. Okay, just very quickly: we went through our...
A
...documents in the group, and here's a quick update, just on the ones that are currently nearing completion. Firstly, let's celebrate the publication of RFC 8884, "Research Directions for Using ICN in Disaster Scenarios". This was a long time coming, but we are happy that it finally got published; thanks again to the authors and everybody who helped getting this done.
A
We
have
a
few
other
drafts
that
are
kind
of
cycling
around
the
our
group
and
the
rsv,
and
so
the
icn
lopen
draft
is
also
almost
finished.
So
I
think
we
are
just
waiting
for
the
ihd
to
agree
that
it's
ready
and
we
moved
on
so
this
had
some.
We
had
some
vision
cycles
already.
A

A
Now, these two NRS documents also had a few cycles: we are waiting for a revision of the requirements one, and the considerations one is currently in IRSG review. And then, finally, the LTE/4G draft.
A
So
that
also
got
some
comments
in
the
isg
and
so
waiting
for
an
for
an
update
on
that
but
yeah.
So
this
has
been
really
good
progress,
so,
despite
all
the
challenges
in
this
year,
so
thanks
everybody
for
keeping
doing
this.
It's
great
to
see
this
is
all
moving
forward.
A

A
We also expect some updates to some of our other documents, but let's discuss those on the list; unless there's anything we missed, please let us know. Okay, good. So with that we can start our technical program, and I'd like to hand it over to Christian.
D
Hopefully I don't lose my view then; it's always a little bit... yeah. This is Webex, you will use your Webex view. So, do you see the slides? Beautiful. Yes, okay, good. So, let's quickly go through this. It's not really a technical presentation; it's more kind of where we are, what the different positions or views are, and quickly a timeline. It all started with the draft, and I saw Chris, who was also at the base meeting.
D
So
where
we
wrote
down
that
very
first
initial
version.
It
was
in
the
background
some
c
programming
which
sees
in
light
and
getting
some
experiences
on
how
you
program
that
then
later
mark
and
dave
joined.
They
have
stake
in
that
because
I
think
of
the
manifest,
of
course,
and
the
nameless
objects
needed
for
ccnx,
and
that
was
very
welcome
to
help
that
draft
evolve
and
in
spring
mark
did
some
programming
and
gave
another
push.
A
namespace
concept
came
into
that.
D

D
Then we had a lot of discussions. I think it started more or less with a mail by Marc, then Jenk joined in, then Ken, and then a lot of ping-pong between Dave, Marc and me. One of the additional things was that the concept of a virtual blob suddenly came up, and it was interesting to see from the discussions that there is really still some work to be done. I just took one sample statement: "FLIC describes a single file."
D
No: by definition, FLIC must always describe two files. So already at the conceptual level, or terminology level, we have to be careful what we are doing here, and we need some revisions. So, quickly, where we are. That is from Dave; he just sent it to me. There is a disagreement on how basic or how complex, how full, how complete the spec should be: the scope.
D
The namespace thing is mentioned, which Marc came up with, and metadata capabilities, so you can easily traverse and refine things in a hash tree. Exactly how that metadata machinery would look is exactly something to discuss, so we are in the middle of that discussion. I'll quickly switch to Marc's assessment.
D
I realized that he made a contribution in September, "Options for FLIC name constructors", and I think there was no follow-up on that, so that is also a pending discussion we should pick up. On the FLIC document itself, similar to Dave: we need name constructors, so that is the namespace concept.
D
How the link section should be done needs a little bit of discussion, and the format. On the question of whether it's too complex or not complex enough, he rather says the current thing is not too complex, so why not go with that? So that is the state. If I can also add my own position: I did a detour recently, looking at existing software called Hypercore, a protocol that is information-centric, from the decentralized-web community, that has an access protocol and has interesting constructions, also using immutable data blocks, hash pointers, signed Merkle trees.
D
So it has a lot of the elements FLIC is about, but they use them differently, at a higher level: they build higher-level data structures, file systems for example, and I think we can learn from that. It can inform what the FLIC scope should be, how to build a toolbox for exactly such high-level data structures. So, of course, I am trying to make arguments here for a more incremental way, and maybe a decomposition of the FLIC document, but that is pending discussion.
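The construction mentioned here, immutable data blocks linked by hash pointers into a Merkle tree whose root can be signed, can be sketched as a toy in a few lines. This is only an illustration of the shared idea, not FLIC's or Hypercore's actual encoding; the block layout and padding rule are assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Hash pointer: a node refers to data by its SHA-256 digest.
    return hashlib.sha256(data).digest()

def merkle_root(blocks) -> bytes:
    """Compute a Merkle root over a list of immutable data blocks.
    Leaves are hashes of the blocks; each inner node hashes the
    concatenation of its two children's digests."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last digest on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any block changes the root, so signing the root authenticates the whole collection; that is the property both manifest designs and Hypercore rely on.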
D
There
is
also
that's
not
necessarily
the
truth
to
aim
for
to
look
at
hyper
core
perceive
because
they
don't
have
really
manifest,
but
it
can
hopefully
help
to
give
a
perspective
on
the
discussions
we
are
having
so
summarizing.
I
think
dave
mark
and
me.
We
definitely
agree
that
we
should
participate
in
more
of
the
technical
discussions.
I'm
not
sure
about
chris;
maybe
he
can
join
in.
He
is
now
also
on
that
poll.
B
As said, I'll chime in for a second. We tend to go off and make false starts on this, so I think our high-order goal ought to be to freeze something we can push through and get to Experimental, because I think the lack of something stable is probably limiting people's interest in actually using manifests in major ways to solve some of the, you know, perennial problems we have. So I'd like to try and set a goal of getting this through RG last call by mid-January.
B
I
don't
know
how
other
people
feel
about
this,
so
we
may
wind
up
taking
a
bit
of
an
axe
to
some
of
the
more
popular
fancy
things
we'd
like
to
do
in
order
to
get
this
through
and
remember
since
this
is
an
experimental
draft,
and
this
is
a
research
group-
there's
no
reason
why
we
can't
you
know,
move
this
to
multiple
versions.
D
Come on in, by the way. Yeah, okay, joining, or no? I think he had a time conflict, right? Yeah, Marc couldn't make it, unfortunately, right. And Chris, do you have something to say here? "Nope, this sounds good."
D
Okay, so I think the four of us should reconvene and see exactly how to take the axe and do something. Okay. Dirk, is that fulfilling what you had in mind with that status update? "Yes, very much, thank you, Christian; that's great. Welcome."
B
Yeah, so I'd like to chime in that this isn't a design team in the sense that other people are not encouraged to join us in working on the technical aspects of this, right? We are not oversubscribed in people's input, by any stretch of the imagination. So folks, please take a look at the notes and the draft, and if you have ideas to contribute, and can particularly help on solidifying the draft, that's obviously something we would very much appreciate.
D

E

D

E
Yeah, so maybe we should just... I mean, if you don't mind driving the slides, I don't want to hold everybody up. "Sure, I can do it, yeah." So, I have the bullets come in sequentially, but you can just basically hit all the bullets on one page, have them show up at the same time, so you don't have to... yeah, sorry about this.
F

G

E
Okay, I mean, it's fine for me. Okay, all right. So, it's very nice to be part of this discussion; thanks for inviting me, Dave and Dirk.
E
So, this is about a few projects, two projects actually, which have been taking place over the last couple of years, on using data-centric approaches for data-intensive science. This encompasses a lot of very important science areas, such as the Large Hadron Collider.

E
That is high-energy physics, which has been responsible for many famous discoveries, including the Higgs boson, for instance, and many Nobel-prize discoveries. The Large Synoptic Survey Telescope is another application, which is going to be the next big thing in astronomy; the Square Kilometre Array is another project in astronomy.
E
Astrophysics; genomics is another application. These are all scientific domains and applications which involve many scientists around the world and enormous data volumes, and they have very common needs in terms of data delivery and distributed computation. What these projects were about is using a data-centric approach to solving the big data-intensive problems within these areas. Next, please. So, these data-intensive applications face a similar set of problems.

E
I mentioned high-energy physics; there is also LSST; LIGO, which is gravitational waves; and genomics. The system challenges have to do with how to index the data, how to secure the data, how to store the data, how to distribute the data, how to analyze the data, and how to learn from the data.
E
These are very common themes that run through all of these different areas, and all of these tasks have to be accomplished using coordinated use of computing, storage and network resources. And while these resources are getting more abundant, they're still limited, and the rate at which the data volume is increasing still far surpasses...

E
...the rate at which these resources are increasing. So what's happening right now, after collaborating with many people in the physics area, and now also starting in the genomics area...
E
We
found
that
these
different
domains
are
dealing
with
similar
problems,
however
they're,
basically
solving
their
own
solution,
solving
their
own
problems
in
isolation,
more
or
less.
Each
of
these
domains
is
developing
their
own
solutions,
which
tend
to
be
incremental,
because
these
experts
are
not
networking
experts,
they're
domain
science,
experts
and
a
lot
of
efforts
tend
to
be
replicated
across
these
different
domains,
and
one
may
ask
why
that
is
why
we
have
this
problem
that
you
have
this
replication
of
efforts
and
incremental
solutions.
E
Part
of
the
problem,
I
think
that
one
can
identify,
is
this
gap
which
exists
between
the
application
needs
and
existing
networks
and
systems,
and
so
I
don't
think
I
need
to
convince
people
here
that
we
have
this
gap
and
and
the
applications
are
really
care
about.
E
They're,
really
caring
about
the
data,
whereas
the
networks
and
systems
tend
to
focus
on
addresses
processes,
servers
connections
and
the
security
solutions
also
focus
on
securing
the
data,
containers
and
delivery
pipes,
and
so
because
of
this
of
this
gap,
these
domain
experts
have
to
basically
cook
up
their
own
ad
hoc
solutions
to
need
to
meet
their
data
needs
given
their
existing
systems,
and
this
is
caused
to
this
kind
of
situation
that
we
have
today.
E
So
what
we're
taking?
What
we're
doing
in
these
projects
is
to
take
a
data,
centric
approach
to
a
system
in
network
design
and
providing
system
support
through
the
whole
data
life
cycle,
from
the
production
of
the
data
naming
the
data
securing
the
data
directly
to
delivering
the
data
using
names
and
enabling
in-network
caching,
for
instance,
is
that
one
is
a
very
big
key
functionality.
That's
required
by
these
applications,
automated
joint
caching
and
forwarding
multicast
delivery.
E
So
so
we
got
to
into
this
a
couple
of
years
ago
when
harvey
newman
who's
a
very
well
a
very
well-established
and
well-connected
physicist,
high
energy
physicist
at
caltech
actually
approached
me
because
they
he
had
heard
of
indian
and
said
you
know
why?
Don't
we
try
to
use
ndn
to
speed
up
things
in
the
high
energy
physics
network?
E
So
this
was
a
great
opportunity
and
you
know
we
started
collaborating
and
we
applied
for
this
project,
which
was,
thankfully
it
was
funded
by
the
nsf
back
in
2017
called
sandy
sdn
assisted
indian
for
data
intensive
experiments.
E
So
the
pis
were
myself
harvey,
newman
and
christos
papadopoulos
at
colorado
state
and
then
because
he
had
gone
to
dhs
said
that
the
project
was
taken
over
by
craig
partridge
and
this
project
has
been,
I
think,
very
fruitful,
and
it
has
been
supported
with
by
the
other
lhc
sites
in
heart
of
the
lhc
sites
and
as
well
as
the
ndm
project
team.
E
So
the
approach
here
is
to
use
ndn
to
redesign
the
lhc
large
hadron
collider
high
energy
physics
network
to
and
to
try
to
optimize
the
workflow.
And
so
what
are
the
main
things?
We
did,
I
would
developed
an
indian
naming
scheme
for
fast
access
and
efficient
communication
in
heaven
and
and
extensible
to
other
fields.
E
We deployed NDN edge caches with SSDs at multiple sites which are connected to the high-energy physics network, and we looked at simultaneous optimization of the caching and forwarding of hot data sets, that is, data sets which a lot of different scientists tend to access in the high-energy physics network. Next, please.

E
So, what are the results from SANDIE? This has been going on for a few years. The feasibility of an NDN-based data distribution system for the LHC was first demonstrated at SC, the Supercomputing conference, in 2018, where we actually demonstrated a system with a redirector from XRootD, the system which directs requests within the HEP network.
E
It redirected these XRootD requests to an NDN-based system, which would deliver the data back; that was demonstrated at SC18. At SC19 last year we showed greatly improved throughput and delay performance, and the implementation had three major components. One is the joint caching and forwarding algorithm that we developed at Northeastern, called VIP; we actually coded that up and implemented it within this high-energy physics testbed network that we put together.

E
We also integrated that with the high-speed NDN-DPDK forwarder developed by NIST, which they actually finished developing very shortly before SC19. It was quite a development effort to integrate these two things, but thankfully that was successful. And then there was an NDN DPDK-based consumer and producer, which was developed by Caltech, also around that time.
E
So all three components were put together for this demo, and the demo took place over a transcontinental layer-2 testbed which ran from the SC19 demo floor to Caltech, Northeastern and Colorado State. This involved, of course, working with a lot of partners at Internet2, ESnet, CENIC and so forth, just to put together these VLANs at layer 2, and this in itself required a lot of work. So this was an actual demo over a wide-area...

E
...network, a live demonstration, and it was taking place with two NSF PMs sitting right in front of us, so we really had to make it work, and thankfully it did. We achieved over 6.7 gigabits per second of throughput.
E
This was a single-threaded implementation, between the NDN-DPDK-based consumer and producer, over the wide-area network, using NDN. So that was the throughput performance.

E
We also used the optimized caching and forwarding algorithms to decrease download times by a factor of 10. So this was quite a success from our point of view, after a few years of development, implementation and integration. Next, please. This is just a screenshot of the throughput which we obtained; if you look at the upper-left-hand corner there, you see the 6.71 gigabits per second. This was photographed on the show floor. Next, please.
E
The bottleneck? Yeah, I think the bottleneck was the forwarder, basically. This performance is very much in line with the default performance that was obtained by NIST in their testbed. They went, of course, to multi-threaded implementations, which increase the throughput linearly with the number of threads; here, this was a single-threaded implementation.
E
Thank you, yeah, sure. Yes, so that was SANDIE, and SANDIE is actually still running, in its last extension here. But just this October we received another grant from the NSF called N-DISE, pronounced like the Andes mountains: NDN for Data-Intensive Science Experiments. You could look at it basically as an extension of, a follow-on to, SANDIE, because SANDIE is ending.

E
So now the team is: Northeastern, with myself as PI; Harvey again; UCLA, where Lixia Zhang and Jason Cong are the co-PIs; and Tennessee Tech, where Susmit Shannigrahi, who became faculty there recently and was formerly with Christos at Colorado State, is co-PI.
E
And what is this new project about? Well, it's about basically pushing the envelope further. We have big challenges coming up for the LHC: data volumes are set to grow 10 times due to the High-Luminosity LHC, which is coming in 2027.

E
This project is also going to look at genomics applications; the human genome data and the Earth BioGenome data are also hitting the exabyte range. And in this project we're going to focus more on the fact that we need to use diverse computation, storage and networking resources, basically everything that we have, to accomplish the tasks we want to accomplish in these applications. So the approach is to build a data-centric ecosystem to provide agile, integrated, interoperable, scalable, robust and trustworthy...
E

E
So it's got a lot of ingredients that would make the system much more high-performing.

E
The goal is to deliver LHC data over a wide-area network at throughputs near 100 gigabits per second and to dramatically decrease download times by using optimized caching, and we're going to have an enhanced testbed: this is going to build on the SANDIE testbed, with more NDN data cache servers, to fill out the N-DISE testbed.
E
Some of the research agenda, just to be a little bit more specific: in order to increase the throughput towards 100 gigabits per second, we're going to look at multi-threaded consumer and producer applications and aim for linear throughput scaling, which was evidenced by the experiments at NIST on the DPDK forwarder.

E
And of course we now have to build the consumer and producer applications, as well as look at multi-threaded versions of the combined caching and forwarding algorithms. We're going to look at containerization in order to deal with diverse server equipment and interfaces, specifically using Docker containers to host guest OSes, for state restoration and to ease upgrading. We're also going to look at data integrity and provenance; as an immediate goal, we're going to look at data-origin authentication.
E
In these high-energy physics and data-science applications, security is not a foremost consideration at the moment, because access to these systems is kept very restricted: you have to apply, and you wait a long time to get approved; once you get in the system, you're in the system. But I think authentication of the data is still extremely important, so we're going to start with that, and look at the use of data manifests for data authentication, which is something that NDN, of course, is very good at.

E
So we're going to start with that. Later on, as we go along with the project, we could also possibly look at provenance tracing, which is the idea that...
E
...the output data of one researcher might be used as the input data for the next researcher, and so it's very important to trace the provenance of the data; but that's going to come later in the project, possibly. Next, please. We're also going to look at congestion control and retransmission.

E
This is something that SANDIE did not have, so that's work we're going to actively look at, and there's of course a lot to draw on from NDN in that respect. We're going to look at multi-threaded caching and forwarding, optimized caching and forwarding with the VIP algorithm, and we're going to look at hierarchical caching systems, because of the data volumes present in these applications.
E
It's just not practical to simply look at RAM, so one has to look at all the kinds of storage one can get one's hands on economically, and that involves SSDs, Intel Optane, etc. But then, of course, all of these have different read and write speeds and different durability, and so forth.

E
So one has to take that into account in any kind of caching algorithm that one develops, and we're going to actively look at that. And then we're going to look, as I mentioned, at FPGA acceleration. These are, of course, being used...
E
They've
been
suggested
that
it
that
would
be
used
for
for
forwarding
name-based
forwarding
in
mdn,
but
we
have
a
specific
way
we're
going
to
use
it
here
in
this
project,
basically
in
the
indian
dpdk
forwarder
there
are
these.
There
are
these
tables
which
are
used,
the
name
dispatch
table,
the
combined
pit
cs
composite
table
and
the
fib
and
a
lot
of
these
lookup
functions
and
the
hash
functions
will
be
accelerated
using
fpgas
and
jason
kong
is
going
to
lead
the
effort
in
doing
that.
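To illustrate the kind of lookup those tables perform, here is a toy FIB longest-prefix match over hierarchical names, stored as a hash table keyed on prefixes. The table layout, face names, and probing strategy are illustrative assumptions, not NDN-DPDK's actual data structures.

```python
def lpm(fib: dict, name: str):
    """Longest-prefix match of a hierarchical NDN-style name against a
    FIB represented as a hash table mapping prefix -> next-hop face.
    Probes from the full name down to shorter and shorter prefixes."""
    comps = name.strip("/").split("/")
    for i in range(len(comps), 0, -1):
        prefix = "/" + "/".join(comps[:i])
        hop = fib.get(prefix)        # one hash computation + lookup per probe
        if hop is not None:
            return hop
    return None

fib = {"/ndn": "face1", "/ndn/hep": "face2"}
hop = lpm(fib, "/ndn/hep/run2018/data")    # matches the /ndn/hep entry
```

Each probe costs a hash over a name prefix, and real forwarders use heuristics to avoid probing every length; it is exactly this repeated hashing and table lookup that FPGA offload targets.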
E
We're very hopeful that this FPGA acceleration can give us a big performance boost in this case. Next, please. All right, so, just to conclude: this was kind of a really short tour of what we've been doing. Data-intensive science applications require fundamental network and system solutions to address very common needs, and we believe that NDN, which provides data-centric system support through this whole data life cycle, is a very good fit for data-intensive science: a natural fit for the LHC, genomics and other data-intensive applications.
E
We have already shown, in the SANDIE project, a high-performing NDN system, with a throughput of 6.7 gigabits per second demonstrated live at SC19, using the NDN-DPDK forwarder as well as the VIP optimized caching and forwarding. With the new project, N-DISE, we're going toward the first prototype production-ready NDN system, integrated with FPGA acceleration and containerization, with support for the SDN operations that Caltech has been looking at. We're very hopeful that we can build a very good system for high-energy physics and genomics and, by leveraging that, we see long-term collaboration with other domain sciences, to develop this common framework which would work for these data-intensive applications and many other communities. All right.
D

E
Yeah, good question. So, we're going to look at congestion control as part of the system; I assume that's what you mean by elastic. In these high-energy physics networks it is definitely possible to control the input rate, in the sense that you basically slow down requests if the network gets congested.

E
That's what happens in practice, and so I think the system is elastic, basically, because the delays which are tolerated in this system are very big, sometimes on the order of hours or days, because these are really large jobs. So I would probably characterize it as elastic.
D

E
Yeah, so actually it's not only a content delivery network, it's also a computation network, because you have these computation and analysis jobs that are done over this network. So actually there's a lot more going on than just content delivery.

A

E
Thanks, Ken. So yeah, I didn't have time to go into the structure of the data too much. It's kind of hierarchically structured: there are the big data sets, and then there are... unfortunately, I can't share my screen right now.
E
For the caching and forwarding, and the caching specifically, we actually did a lot of study as to what granularity we wanted to work with, and it has a lot to do with the popularity distribution, how fast it falls off at different granularities. We found that at the data-block level the falloff was very desirable, meaning you could track a relatively small number of data blocks and be able to capture most of the popularity.

E
So that's what the caching algorithms were designed for. And did we do anything interesting with names? For the high-energy physics application, actually, it is very straightforward.
E
They
already
have
a
very
well
established
hierarchy
called
naming
scheme,
and
the
nice
thing
about
energy
physics.
Is
that
it's
a
naming
scheme
which
everybody
kind
of
agrees
with
and
because
it's
a
very
hierarchical
system
and
because
all
the
data
is
generated
by
essentially
one
location
which
is
stern
right.
So
and
and
so
it's
not
a
situation
where
you
know
you
have
many
different
producers
of
data,
so
so
there's
a
relatively
straightforward
translator,
which
you
can
build,
which
translates
from
high
energy
physics
names
to
ndn-based
names.
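As a minimal sketch of such a translator: since HEP file names are already hierarchical paths, translation is essentially splitting the path into NDN name components under an application prefix. The /store-style path and the /ndn/hep prefix here are assumptions for illustration, not the project's actual scheme.

```python
def hep_to_ndn(path: str, prefix: str = "/ndn/hep") -> list:
    """Translate a hierarchical HEP file path into an NDN name,
    represented here simply as a list of name components."""
    comps = [c for c in path.strip("/").split("/") if c]
    return prefix.strip("/").split("/") + comps

# A hypothetical HEP data file path becomes an NDN name directly:
name = hep_to_ndn("/store/data/Run2018A/AOD/file.root")
```

Because the mapping preserves the hierarchy, NDN prefix-based forwarding and caching work on the translated names without any extra lookup service.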
E
So in that sense it's not that interesting, but it's also very good that we have something like that. Now, in the genomics application it's quite different, and that's one of the situations we're very interested in and also challenged by: in the genomics case you have static data, but you also have a lot of dynamic data which is being generated by different players around the world, and they may have very different naming schemes.

E
So how do we do real-time discovery of new data sets, and how do we adapt the forwarding and the caching and all those functionalities in this kind of situation? That is a real challenge. So, Susmit has been working with various collaborators in the genomics area, and he's going to be a key person on this team, to take what we have and generalize it to the genomics application.
A

E
One more, okay, a question by Eve. Yes, so Eve asks: can you share more about what kinds of computation you're placing throughout the network?

E
So there are actually all kinds of computations that go on in these networks. The vast majority of the raw data is actually thrown away; it's not kept around, because there's just too much data. So there's all kinds of initial processing that goes on, and then at various stages after the initial processing there's further processing, and people use...
E
Even
though
they're
using
the
same
data
process
data,
they
use
their
own
algorithms,
of
course,
to
to
to
hunt
for
particles
right.
So
so
there's
a
lot
of
computate
different
types
of
computation
that
you
would
have
to
do
over
over
the
network.
And
usually
you
know
you
have
to
schedule
a
certain
amount
of
time
on
some
server
and
then
you
have
to
pull
the
data
set
and
then
you
have
to
run
it.
So
all
of
this
actually
is
being
coordinated
in
in
within
this
high
energy
physics
network.
E
You
know
by
clever
people
but
they're,
not
designed
using
you
know,
using
networking
sort
of
fundamental
networking
principles
necessarily
so
so
I
think
the
answer
is
it's
basically
that
there
are
all
kinds
of
computation
going
on
there
there
there
is.
There
are
filtering
operations,
there
are
learning
operations,
there
are
inference
operations
that
are
happening
and
they're
being
run
by
different
people
and
they're
all
different
algorithms,
and
they
have
to
be
situated
in
different
places
in
the
network.
E
So
it's
actually
a
very
it's
very
interesting
and
challenging
problem,
but
just
the
data
delivery
part
is
currently
what
we're
focusing
on.
But
that's
already
a
big
piece
of
the
puzzle.
A
Okay, so next we have Namseok Ko on a broker-based pub/sub system for NDN, and I'll try to make him presenter again; let's see how that works.

H

A

A

H
Okay, hi. This is Namseok Ko from ETRI, Korea, and I'm happy to introduce our research here in ICNRG. Actually, there are several people involved in this project, and I am presenting on behalf of them. The title is "A Broker-Based Pub/Sub System for NDN".
H
But
the
problem
is
that
they
are
not
that
scalable
and
I
wouldn't
go
much
detail
on
their
part
here,
though,
but
they
are
not
scalable
scalable
enough
and
demo3
can
use
the
enterprise
network
and
they
are
limited,
especially
in
low
powered
iot
devices.
The
devices
cannot
handle
a
lot
of
subscribers
at
the
same
time,
and
the
second
issue
is
that
they
are
not
flexible
as
in
ip
based
approaches.
H
So,
based
on
the
the
problem
statement
in
the
previous
slide,
we
set
up
the
design
directions.
First,
in
order
to
cope
with
issues
on
the
low
power
low
performance
producers.
H
MQTT-like wildcard topic matches are supported, such as the single-level wildcard, the plus ('+') sign, and the multi-level wildcard, the sharp ('#') sign. We can say that the topics are defined by subscribers, because producers just publish data with their names and subscribers select their topics based on the published data names. So topics are defined by subscribers in our design.
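Since the prototype described later in the talk is written in Python, a minimal sketch of this MQTT-style wildcard matching might look as follows. The function name and the slash-separated component convention are my assumptions for illustration, not part of the presented design.

```python
# Minimal sketch of MQTT-style wildcard matching over slash-separated names,
# assuming '+' matches exactly one component and '#' matches any remainder.

def topic_matches(topic_filter: str, name: str) -> bool:
    fparts = topic_filter.strip("/").split("/")
    nparts = name.strip("/").split("/")
    for i, f in enumerate(fparts):
        if f == "#":          # multi-level wildcard: matches everything below
            return True
        if i >= len(nparts):  # name is shorter than the filter
            return False
        if f not in ("+", nparts[i]):  # '+' matches any single component
            return False
    return len(fparts) == len(nparts)
```

For example, a filter like `/a/+/temp` would match `/a/room385/temp` but not `/a/room385/humidity`, while `/a/#` matches everything under `/a`.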
H
So in our architecture, we use multiple brokers, which are also called LN nodes. They do the brokering between publishers and subscribers, as the name indicates, and they can also store published data on behalf of limited-performance publishers. Data can also be stored in the devices themselves or in external repositories, according to the configuration, and published data names are managed by a distributed hash table (DHT) on those brokers.
H
You can find more information as we go over the details in the following slides.
H
A service prefix, /ln, is used for this pub/sub service in this design. Any publishers and subscribers can reach the nearest broker using that name, and each broker also has its own name.
H
We defined the following naming scheme. For data, the data stream name comes first, followed by a sequence number, for example for the temperature in room 385 in building 1793.
H
There is the data stream name, and under that stream name you can publish data with a sequence number. For commands, the service prefix comes first, that is, /ln comes first, followed by the command, which will be explained in the next slide, and then the data name after that. So, for example, /ln comes first, then PA, which is publish advertisement (I'll explain that in the next slide), and then the name of the data.
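As a rough illustration of the two name forms described above, a sketch in Python might look like this. The helper names and the example stream name are hypothetical; only the /ln service prefix and the PA command come from the talk.

```python
# Hypothetical helpers for the two name forms described in the talk:
#   data:    <stream-name>/<sequence-number>
#   command: /ln/<command>/<stream-name>

SERVICE_PREFIX = "/ln"

def data_name(stream: str, seq: int) -> str:
    # e.g. the room-385 temperature stream, sequence number 3
    return f"{stream}/{seq}"

def command_name(command: str, stream: str) -> str:
    # e.g. PA (publish advertisement) for the same stream
    return f"{SERVICE_PREFIX}/{command}{stream}"
```

So `data_name("/bldg1793/room385/temp", 3)` yields a versioned data name, while `command_name("PA", "/bldg1793/room385/temp")` yields the advertisement command name routed to the nearest broker.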
H
We defined several protocol messages; they are also commands. To publish data: with the command PA (publish advertisement), a publisher can advertise the name of a data stream to publish; with publish un-advertisement, it can cancel the publication of a data stream; and with publish data, it can actually publish the data itself. Then there are the commands of the subscribe procedure.
H
Subscribe topic: a topic subscription (though the name is a little weird) subscribes to a topic by requesting the topic manifest, and the topic manifest will include a list of the data RNs holding the subscribed data streams. A subscribe manifest request asks a specific data RN for its data manifest, and the data manifest will include the data names for a data stream. And then the last one is the subscribe data request.
H
A subscribe data request asks a specific data RN for the data itself. Each procedure will be explained step by step.
H
So there is a logical separation of topic management and data management. There are multiple publishers here and multiple brokers; even if they are drawn separately here, they can actually be co-hosted on the same physical machine. Publishers 1, 2, and 3 will publish data with these topic names: topic2, topic1/a, and topic1/b. When publisher 1 advertises its topic name, the nearest LN, RN1 in this case, will have the topic topic2.
H
If it is hashed, then the hash will indicate that the topic RN is RN1; for this one it will be topic RN4; and for this one as well, after the hash, it will be topic RN4.
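The talk says the topic name is hashed to pick the responsible RN but does not give the hash function, so the following is purely an illustrative sketch: SHA-256 of the topic prefix, taken modulo a known broker list, stands in for whatever DHT assignment the real system uses.

```python
import hashlib

# Hypothetical mapping of a topic name to the broker (RN) that manages it.
# The hash function and broker list are assumptions for illustration only.

BROKERS = ["RN1", "RN2", "RN3", "RN4"]

def topic_prefix(data_name: str, depth: int = 1) -> str:
    # Take the first `depth` components as the topic, e.g. "/topic1/a" -> "/topic1"
    parts = data_name.strip("/").split("/")
    return "/" + "/".join(parts[:depth])

def responsible_broker(data_name: str) -> str:
    digest = hashlib.sha256(topic_prefix(data_name).encode()).digest()
    return BROKERS[digest[0] % len(BROKERS)]
```

Because every name under the same topic prefix hashes to the same broker, all advertisements for a topic end up managed by one RN, which is what lets a subscriber find the whole topic tree in one place.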
H
Okay, so when the messages arrive at the topic RN, the topic RN will manage the topic tree based on the names. So in this case, topic2 actually came from RN1, and if you look at this topic tree, topic1/a is from RN2 and topic1/b is from RN4. I already explained that the publisher, in this case publisher 2, will advertise this data stream, and if it has topic1, the hash will indicate RN4 for the topic management. So it will arrive at RN4, which will then insert the name into the topic tree. So here, in this case, topic1/a is included.
H
Un-advertisement is also very similar, except that it deletes the name from the topic tree. After a stream is advertised, the real data will be published from each publisher.
H
It will publish data: first, from this topic2, the first sequence, the first data item, will be published. It is published to RN1 because RN1 is the nearest one to publisher 1. In RN1 it will update the data manifest, and the data can also be stored. Actually, we have several options.
H
Data can be stored here in the RN, or data can still be located in the publishers; and if there are other external repository systems, it can also be stored in those external repositories.
H
So the data manifest will have the information that the data is actually stored here. In this case, the first sequence number has arrived, so it has the information on that data. The other RNs also do similar work when data arrive.
H
Published data are stored, and the data manifest files for the topics are updated in each RN. As I said, data can be stored in other places as well, like in the devices or in external repositories.
H
For small data, we actually include the data in the interest itself; but if the files are large, then only the name is published to the LN, and the RN will pull the data. So we have two separate approaches for different file sizes. Once the data are published, a subscriber can subscribe to the topic.
H
So in this case, subscriber 1 will subscribe to topic1 with the '#' wildcard, so it can include topic1/a or /b, or if there were a /c, that could also be included. But in this case only two data streams were published, so only topic1/a and topic1/b are included for this topic. When the topic subscription is sent to the nearest RN, it also hashes the topic.
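The subscription step above can be sketched as a lookup of the '#' filter against the set of advertised stream names; this set-based store is a simplification of the broker's topic tree, and only the trailing-'#' form from the example is handled.

```python
# Hypothetical sketch of resolving a multi-level ('#') subscription against the
# advertised stream names, mirroring the example where subscribing to topic1
# with '#' matches topic1/a and topic1/b.

advertised = {"/topic1/a", "/topic1/b", "/topic2"}

def resolve_subscription(topic_filter: str) -> list:
    assert topic_filter.endswith("/#")   # only the trailing-'#' form in this sketch
    prefix = topic_filter[:-2]
    return sorted(n for n in advertised if n.startswith(prefix + "/"))
```

Here `resolve_subscription("/topic1/#")` returns both published streams under topic1, and a later advertisement of `/topic1/c` would automatically be included.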
H
So it knows that the topic is managed in topic RN4. Then it will fetch the topic manifest, and the topic manifest will have the information about each topic, like topic1/a and topic1/b: topic1/a is managed in RN2 and topic1/b is managed in RN4.
H
So this is the procedure; you can see it here as well. After receiving the topic manifest, it will request the real data from RN2 and RN4: it will request topic1/a from RN2 and topic1/b from RN4.
H
Actually, after retrieving the data manifest, it has the information on the data. In this case, for the data of topic1/a, three items, sequence numbers one to three, were already published, so subscriber 1 can fetch items one to three; the same goes for RN4. It requests the data, as I said, based on the information in the data manifest.
H
And these are the software functional blocks. I don't think I need to go over the details of these blocks, but we implemented our prototype using Python, so there are many utilities; the topic management tree, for example, can be implemented as a trie.
H
This software will be opened to the public soon, after we clean it up. We have finished most of the software, but after fixing some bugs and cleaning up the source, we'll open the source to the public. And this is the demo. I think it is a little bit small, but I hope you can see it.
H
There are three brokers in this case. This one is the publisher and this one is the subscriber; the publisher is attached, or is near, to broker 1, and the subscriber is near broker 2. So after starting the three brokers…
H
So in this case, two named streams will be published, like the room 385 temperature stream and the room 215 temperature stream; two streams will be published.
H
So when it is published, the message will arrive at broker 1, because broker 1 is the nearest one, and it will advertise.
H
After advertising the first data stream, you can see the stream arrived here; then, after hashing the topic, it decides the topic has to be managed in RN3. So the message is delivered to RN3, so that RN3 can manage the topic there. The same goes for the second data stream, the room 215 temperature.
H
We also provide some utilities to check the topic tree management status and the data management status in each client. So we can check:
H
Two streams were advertised, and the topics of the two streams are managed in broker 3.
H
Then we publish data: three data items will be published. They arrive here, so three data items arrived and are stored in broker 1, and the data manifest file is also updated here.
H
For the second data name, we can check the status: one data stream was published, and for the second data stream we also published three data items. Then we can also check that two data streams were published. So actually all the data were stored here in broker 1, and the topics are managed in broker 3.
H
So before running the subscription, we can check the status, and we see the same result as in broker 1, I mean publisher 1: two data streams were published. So we can subscribe to the common prefix with the '#' wildcard, so all the data streams under this prefix, the room 215 and room 385 temperatures, will be matched.
H
So it fetches the topic manifest, and in the topic manifest you can see those two streams managed in RN1; so it will fetch the data manifests from RN1.
H
After fetching the data manifests, it can check what data each one has.
H
So that's the simple demonstration. As I said, this code will be published soon; we'll announce it through the mailing list to ICNRG and also to the NDN community.
H
So, in summary, we developed a broker-based pub/sub system for NDN, and we argue that it is more scalable and flexible than existing NDN approaches. We're going to release our code as open-source software soon.
A
Great. First of all, thanks a lot for bringing your work to ICNRG and for doing this nice demo. You already have a few questions in the chat; not sure whether you can see them.
A
Okay, let me check that. Okay, nice. The first one, I can read it to you, is by Dave.
A
So: how is deletion accomplished? By republishing a topic manifest without the data that you want to delete? And are you using manifest versions for that?
H
No, we'll just... I mean, as you can see in the…
H
So this is the publish un-advertisement. It uses the same approach: /ln, then the publish un-advertisement command, and then the name. It works the same way, so it will know that the topic is managed in a certain RN; the message will be delivered to the RN which manages the topic, which will then search the topic tree using the name and remove the topic from the tree.
A
How are subscribers notified if there's new data in the brokers?
H
How are subscribers notified that there's new data in the brokers? So actually, I think most of you already know that the NDN approach, the ICN approach, is pull-based. So we have to check periodically: if we want to know whether there are new data, we have to check periodically.
H
Okay, and the next one is: brokers introduce a centralistic element into the system. Can each node be a broker, making it a peer-to-peer system? Would this be able to survive a network partition?
H
I guess, I think we can. If you generalize this approach, then yeah, as you said, you may make each node a broker node; you can extend this like IPFS or something like that. So yes, we can think about it that way; we can fully decentralize the brokers.
A
Yeah, so that's my question. In the beginning you said you are aiming for a more scalable approach than, for example, PSync.
H
Yeah, that's right. If I had to explain these things in more detail it would take some time, so I didn't do that. But in PSync, for example, they're using a Bloom filter in the name, so the name can get big.
H
You can make the name big using the Bloom filter, but I think there's a limitation on that; we cannot extend the name unlimitedly, so that's the first scalability issue. And even with partial sync, I think, yeah, there is the same issue, I guess.
H
We call that subscription: we receive the published information using the data manifest. Even if you have to check it periodically, we call that a subscription.
H
I mean, currently there's the NDN way for now.
A
And is the chat being saved? Yeah.
A
Okay, yeah, great. Are there any other questions?
A
Okay, yeah, thanks again. Looking forward to the software release, and thanks for being at the meeting; we know it's not a convenient time for you.
A
Yes, thanks. All right, so next, in the same time zone actually, is a talk on an NDN-based Ethereum blockchain, and I'll make you a presenter.
A
So welcome. Can you share your slides? Okay, coming up.
I
So we believe that blockchain may have many potential use cases, especially when we think about the decentralized internet, where blockchain may play some important role. For example, we can build applications like a name resolution system, identity management, or even a PKI system on top of a blockchain. But right now we don't have any blockchain system for ICN.
I
And of course we want to support NDN-based blockchain research, because when we move the blockchain system from the IP network to the NDN network, we change the communication model. So basically we may find new problems; for example, we may have new consensus protocols designed specifically for the NDN network, or we may have security issues with a blockchain system over NDN. And another reason we want to develop our blockchain system is to…
I
So in the end we selected Ethereum for several reasons. The first is that it supports smart contracts, which have many potential use cases, and currently in the Ethereum ecosystem there are a lot of decentralized applications. The Ethereum network has been working securely for many years, the source code is very stable and optimized, it is well supported by a big community, and the Ethereum platform is also very popular for academic research.
I
Here I'm going to explain how data is propagated in the Ethereum blockchain network. Basically, a blockchain system is a distributed, replicated system: every node tries to have the same set of data, so everything has to be broadcast in the blockchain network.
I
So when a node has new data, for example a new transaction or a new block, it is going to send the transaction or block to the other peers. If the data is small, it is pushed directly to the other peers; but if the data is large, they usually push it to a small number of peers, and for the remaining peers they just announce the identity of the data, and those peers then have to request to download the data.
I
You can see that in this data propagation model we have a lot of traffic redundancy, because many copies of the same object are sent and received on the network. And if you announce the data, the other nodes have to download it from you, so sometimes it can take a very long distance to download the object.
I
So we want to design a blockchain system for the NDN network, and when we started to design our blockchain we had several concerns. You can see that data propagation in a blockchain system is similar to the data synchronization problem, and we already have several protocols such as ChronoSync and VectorSync. So our first question is whether we can use those protocols for our system, and the second question is whether we need to use a peer-to-peer overlay.
I
We analyzed the protocols and came to the conclusion that we cannot use the existing protocols, for several reasons. The first reason is that in a blockchain system we expect that nodes can come and leave, and we do not manage the membership of the nodes in the system; but in the existing protocols, like ChronoSync, you usually need to manage who is joining the group.
I
The reason is that they model the state of the system based on the state of each individual member, the union of the states of all members.
I
So they need membership management, and in a blockchain system we cannot afford such membership management. The second problem is about scalability, because in ChronoSync you have to use interest multicast: when a node in ChronoSync wants to broadcast its state, it is going to send an interest message, and that interest message has to be multicast to every other member of the group. In order to do so, all the NFDs have to enable the multicast strategy.
I
We believe that is not a feasible assumption for a blockchain system. The last issue is that the existing protocols were not designed for systems that have malicious nodes: for example, if a malicious node sends some invalid state, it can cause all the other nodes in the sync system to stop for state reconciliation.
I
We believe that in a blockchain system this is not acceptable, so we think we cannot use the existing protocols for the blockchain system. The second question is whether we need to use a peer-to-peer overlay, and we think the blockchain system does need a P2P overlay, because in a blockchain system, when you receive data, you really have to validate it before you propagate it to the other peers.
I
Every node in the blockchain system must have multiple prefixes, because a blockchain node has to be a consumer and a producer at the same time, so it must have a routable prefix. And in order to enable in-network caching and interest aggregation, we need to name all the data objects with globally unique names, and the names also have to be location-independent.
I
But if we have location-independent names, how can we forward the interest to the producer? In order to do so, we have to separate the object name from the forwarding information, so we put the forwarding information in the forwarding hint of the interest.
I
So we use an announce-and-pull data broadcasting scheme for broadcasting the data in the blockchain system. Basically, when a node has new data, it announces the identity of the object to its peers in the P2P overlay, and then the other peers that receive the announcement request the data directly from the announcer.
I
In this slide I'm going to explain in a little more detail how the announce-and-pull data broadcasting works.
I
Basically, in the figure you can see that when, for example, peer 0 has new data, a new block or new transaction, it is going to announce the data to the other peers, P1 and P2. It uses an interest packet, embeds the object identity in the interest packet, and sends it to the other peers; and based on the announcement, the other peers are going to…
I
They create a name for the data packet and send a request to the announcer for the data. Because every data object has a unique identity, all the interests are going to have the same name, so we get interest aggregation. When the announcer receives the interest, it sends the data back to the network, and the data is multicast to all the requesters. When the other peers receive the data, they have to validate it before announcing it to the other peers.
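The announce-and-pull flow above can be simulated in a few lines. Peer names, the object store, and the aggregation logic here are illustrative only; a real deployment exchanges NDN interest/data packets through a forwarder such as NFD, in this case via the authors' Go client.

```python
import hashlib

# Toy simulation of announce-and-pull with interest aggregation.

class Forwarder:
    """Models one NFD hop: identical pending interests are aggregated, so only
    one request per object name travels upstream to the announcer."""
    def __init__(self, announcer_store):
        self.store = announcer_store
        self.pending = set()
        self.upstream_fetches = 0

    def fetch(self, name):
        if name not in self.pending:
            self.pending.add(name)
            self.upstream_fetches += 1  # first interest goes upstream
        return self.store[name]         # later ones are satisfied in-network

def object_id(payload: bytes) -> str:
    # Each object is identified by a hash of its content, so the interest name
    # for a given block or transaction is globally unique.
    return hashlib.sha256(payload).hexdigest()

block = b"new-block"
name = object_id(block)
fwd = Forwarder({name: block})

# Peer 0 announces `name`; peers P1 and P2 both pull it through the forwarder,
# but only one fetch reaches the announcer thanks to interest aggregation.
data_p1 = fwd.fetch(name)
data_p2 = fwd.fetch(name)
```

This is exactly the effect measured later in the demo, where five transactions produce only five upstream transaction requests regardless of how many peers pull them.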
I
Okay, so in this figure I'm going to explain the idea in a bit more detail. In this figure we have a network of four domains, and we have one blockchain node in one domain with a node name like …/devnet/bob. When he gets a new block, he is going to announce the block to the other peers, for example Alice from UCLA.
I
Alice creates a name for the data object and requests the data from the announcer. Because Alice already knows the mapping between Bob's identity and his host name, she can put that information in the forwarding hint of the interest, so that the interest can be routed to the announcer.
I
And when a second peer requests the same data, it is going to get it from the NFD cache.
I
So we implemented our blockchain system on NDN. How did we develop our system? We started from Geth, the official go-ethereum client, and because right now the NDN community does not provide a client library in the Go language, we had to develop a minimal Go NDN client. We replaced the P2P model in the existing Ethereum client with an NDN-based P2P model, and we designed and implemented all the block and transaction broadcasting and synchronization protocols.
I
After we finished the implementation, we ran some experiments on it, and I think our software is quite stable now. In this slide I'm going to explain how we did the performance evaluation of our blockchain system.
I
We set up a system of five domains, and in each domain we have from five to twenty blockchain nodes, so in total we have around 25 to 100 nodes. We run the blockchain system, send transactions at a constant rate, measure the upstream and downstream traffic at every node, take the average of the traffic, and calculate two ratios. The first one we call the traffic redundancy ratio:
I
We take the traffic divided by the size of the blockchain; for this number, smaller is better. The second ratio is the caching ratio: we take the downstream traffic minus the upstream traffic, divided by the downstream traffic. For this number…
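Written out as arithmetic, the two metrics look as follows. The traffic figures plugged in are the approximate numbers quoted later in the talk (IP: around 9 MB sent and 9 MB received per 1 MB of blockchain; NDN: under 2 MB sent and under 4 MB received) and are illustrative only.

```python
# The two evaluation metrics as described in the talk.

def traffic_redundancy_ratio(traffic_mb: float, chain_size_mb: float) -> float:
    # Bytes of traffic per byte of blockchain produced; smaller is better.
    return traffic_mb / chain_size_mb

def caching_ratio(downstream_mb: float, upstream_mb: float) -> float:
    # Fraction of downstream traffic not matched by an upstream send, i.e.
    # served from in-network caches; larger is better.
    return (downstream_mb - upstream_mb) / downstream_mb

ip_total  = traffic_redundancy_ratio(9.0 + 9.0, 1.0)  # 18x sent + received
ndn_total = traffic_redundancy_ratio(2.0 + 4.0, 1.0)  # 6x sent + received
reduction = 1 - ndn_total / ip_total                  # about 0.67, the "around seventy percent"
```

With these figures the caching ratio for the NDN case is (4 - 2) / 4 = 0.5, meaning half of the downstream bytes were served from caches.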
I
On the left-hand side we plot the traffic. The blue line is the upstream and downstream traffic of the IP-based blockchain; in the IP-based blockchain, upstream and downstream traffic are the same. The red line is the NDN downstream traffic, the orange line the upstream traffic, and the purple line is the size of the blockchain.
I
You can see that there is a lot of reduction in the total traffic in the NDN blockchain. If we take the ratio, we can see, for example, that in the IP network, to produce one megabyte of blockchain we need to send around nine megabytes of traffic and receive around nine megabytes of traffic; but in the NDN-based blockchain…
I
We only need to send less than two megabytes of data and receive less than four megabytes of data, so in total we reduce around seventy percent of the traffic in the NDN-based blockchain. In the lower figures we show the caching efficiency in the NDN-based blockchain. On the left-hand side we see the traffic: the blue line is downstream traffic, the red line is upstream traffic, and the black line is the size of the blockchain.
I
All right, okay, I'm going to show you the demo of the system. Basically I have a system of 100 blockchain nodes running on five domains.
A
No, I'm sorry, you don't seem to be sharing any window right now. Maybe you have to set it up again.
I
As you can see, when I connect to the node, I can see that the node right now has around 8,670 blocks. Then I connect to another node and I can see that it has the same number of blocks. Then I move to another window.
I
I start a new node.
I
And you can see the content of some blocks, and they must be identical.
I
So let me delete all the previous data, and here I'm going to generate a new node. Basically we have to give it the information of the first block, the genesis block of the blockchain.
I
Okay, so then I can check the pending transactions in each blockchain node. We can see that we have five pending transactions.
I
So I sent five transactions to this node, and basically the other nodes are going to request the transactions from this node. But you can see here that there are only five transaction requests, which means that the interests were aggregated in the NFD; so the node received exactly…
I
All right, do you see my screen now? Yes. All right, so in conclusion, we can see that with the NDN blockchain we can reduce traffic, and we plan to run more tests on other types of networks for latency measurements, because right now we cannot measure the latency of the NDN-based blockchain and compare it to the IP-based blockchain.
I
So we plan to do more tests later, and we plan to publish our software as an open-source package. One of the things we are concerned about is that we put the announcement in the interest packet, and we are wondering whether that is an appropriate way to send data in ICN or not.
A
Good to see. Yes, we have time for, say, one or two questions; we're running a bit late.
A
I think this was really interesting, in terms of the design decisions that you took for the protocol.
A
If there are no questions now: I think it would be really nice if you could maybe write up your design, in something like a spec or a paper; it doesn't have to be a draft necessarily, but I think that would help people to understand it and have a good discussion with you. In general, I think many people have been looking forward to this.
A
We always had the intuition that these gossip protocols are really inefficient, so an ICN-based approach seems suitable, and yours is one of the first really experimental approaches. It's really good that you actually built the system and were able to test it.
A
I hope you are on the ICNRG mailing list; that may be a good way for you to get more questions.
A
Great, yeah, looking forward to it. Thank you very much again.
A
Thank you. Okay, yeah, sorry, we're running a bit late, but this was super interesting, and it's going to be interesting for the next presentation as well. So we have Toru-san, who also has a very inconvenient time zone; I apologize for not optimizing the agenda for that.
J
This talk is a summary of our published paper on onion routing, which appeared in the IEEE Transactions on Network and Service Management this August. Okay. So our presentation focuses on producer anonymity. There are a couple of definitions of anonymity: one is consumer anonymity and the other is producer anonymity. Consumer anonymity means that adversaries cannot learn who retrieves some specific content, and producer anonymity means that adversaries cannot learn who publishes some specific content.
J
So today's topic focuses on producer anonymity. These are possible scenarios for the two types of anonymity: one is privacy-sensitive applications like location-based services, and the other is censorship evasion, for instance preventing the censorship of publishers. Okay, so in the literature, consumer anonymity and producer anonymity have been addressed.
J
Several studies exist; the most important one is ANDaNA, which was developed by UCI. This work was inspired by onion routing in IP.
J
It is essentially Tor in an NDN network. There is also a scheme that provides concealment, but it is not onion routing; it is P2P-based.
J
Our work is an extension of it. Another approach provides group producer anonymity through attribute-based signatures, which address producer anonymity; however, that study only hides the producer through the signatures.
J
That is not sufficient to completely achieve producer anonymity. So in my talk, first we briefly explain ANDaNA, and then we explain our producer anonymity protocol.
J
First, consumer anonymity.
J
This figure shows a system overview of ANDaNA. In ANDaNA, a consumer selects a series of two anonymizing routers, called the circuit. Then the consumer shares
J
encrypted secret keys with both anonymizing routers. After that, the consumer issues an interest packet whose name is encapsulated in multiple layers of secret-key encryption along the circuit. It is very similar to Tor and its onion routing. Then each anonymizing router decrypts the interest packet and forwards it to the next anonymizing router.
J
This is an advantage of NDN: interest and data packets do not carry any addresses, so even the first anonymizing router…
J
Okay. So we focus on this advantage of ANDaNA, and we extend ANDaNA's onion routing beyond consumer anonymity.
J
Producer anonymity is a little different from receiver anonymity in IP, since we should also consider the content name and signature. So we carefully considered this: in NDN networks, adversaries can correlate content with its producer.
J
So, if adversaries compromise one or more of the anonymizing routers, they can deduce the route and hence identify the producer.
J
We designed an onion routing protocol for producer anonymity by taking this leakage of information into account. The goals of the protocol are twofold. One is to design a system that achieves producer anonymity against adversaries who leverage content names, signatures, and packet routes.
J
The first mechanism is that a producer generates a hidden name, called an onion name.
J
The second is that the producer asks an anonymizing router to act as a rendezvous point; the anonymizing routers relay packets between the consumer and the producer.
J
The next several slides show the details of the protocols; I will skip some of them. The first one is the onion name. Its structure is like this: the first component is a reserved word, and the second one is the hash value of the public key ID of the producer.
J
Okay, anyway, this type of name does not reveal information about the producer, because it is not human-readable.
J
The producer asks the anonymizing router to act as a rendezvous point by sending it the onion name and a self-certified signature. Then the anonymizing router accepts if the hash value contained in the onion name is valid for the public key ID in the signature's certificate. One of the problems is that the producer cannot send this request with the standard interest/data exchange. Anyway, we need some kind of anonymity here: the producer's location must be hidden from all other entities to achieve producer anonymity.
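The self-certifying onion name and the rendezvous router's acceptance check described above can be sketched like this. The component layout, the reserved word, and the use of SHA-256 are illustrative assumptions; the talk only specifies a reserved first component and a hash of the producer's public key.

```python
import hashlib

RESERVED = "onion"  # hypothetical reserved first name component

def make_onion_name(producer_pubkey: bytes) -> str:
    # Second component: hash of the producer's public key (self-certifying).
    digest = hashlib.sha256(producer_pubkey).hexdigest()
    return f"/{RESERVED}/{digest}"

def is_valid_for(onion_name: str, pubkey: bytes) -> bool:
    # A rendezvous router accepts the request only if the hash embedded in
    # the onion name matches the public key presented with the signature.
    return onion_name.split("/")[2] == hashlib.sha256(pubkey).hexdigest()

pubkey = b"-----producer public key bytes-----"
name = make_onion_name(pubkey)
assert is_valid_for(name, pubkey)
assert not is_valid_for(name, b"some other key")
```

Because the name is bound to the key by the hash, anyone can verify ownership of the name without learning who the producer is.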
J
So in that case we use onion encryption through intermediate anonymizing routers that know neither the sender nor the receiver.
J
We use this kind of mechanism between the anonymizing routers. Okay, we don't have so much time, so we will skip ahead.
J
We will skip the details of the protocols. Anyway, I skipped the publication and retrieval phases. After the encryption keys are shared among the producer, the anonymizing routers, and the rendezvous point, the producer can, for example, exchange interest packets with the rendezvous point anonymously.
J
Finally, an acknowledgement is returned back to the producer. So, by using the shared encryption keys, a consumer
J
can reach the producer. Okay, so currently we are implementing this protocol. This is a sample of the preliminary evaluation of the performance. This slide shows the results.
J
Another important point about our producer anonymity protocol is twofold. One is fewer hops: the number of hops is smaller than in producer anonymity protocols in IP. The other is that our protocol is more resilient against deanonymization attacks than Tor in IP.
J
So at the bottom is the number of compromised routers. Okay, and also, if a couple of routers are compromised by an adversary, the unlinkability of the client and the server is broken. So in Tor hidden services in IP, the adversary compromises the two routers.
J
Okay, sorry, this is not right: in Tor, only one router; the adversary must just run a router. In contrast, in our protocol, the adversary must compromise more routers.
J
We are implementing this protocol on Cefore, which is provided by NICT. Okay.
A
So yeah, the first question was by Dave: is the onion name the same for everybody, and if so, can an adversary learn anything by seeing the requests for the same onion name?
J
Yeah, so anyway, okay, we did not check that case. I think the same name is used in each interest and data exchange.
J
But we did not check whether the protocol is resilient against a linking attack.
J
A
J
Yes, also, we need some advanced mechanism to send the onion name. Yes, so we plan to use some kind of these things.
A
So just as a reference, I'm not sure you are aware of the CCNx key exchange protocol that was presented in ICNRG years ago; it can actually be used for setting up a TLS-like security context.
J
So currently we use a very simple Diffie-Hellman type of protocol. We could use the CCNx key exchange protocol, but we haven't checked whether we can use it. Yeah, okay.
A
Just a pointer for you. Great, any other questions?
A
Okay, thank you very much again, appreciate it. So we are moving on to our final presentation by Cenk Gündoğan, and I am making him presenter.
K
So thank you, Dave and Dirk, for having this slot on the call. This presentation is mostly a recap and continuation of the work that we did together with Christian Amsüss, Thomas Schmidt, and Matthias Wählisch for the ICN '20 conference, where we built a web of things deployment, an RFC-compliant web of things deployment, that also displays information-centric characteristics. If you're interested in a more highly technical talk, then you can also look at the pre-recorded video on this topic that is, I think, hosted on the ICN conference website.
K
So what does the web of things actually mean? For me, a web of things is a deployment where we have constrained IoT devices. They interconnect, using a low-power and lossy network, even multi-hop networks, to more powerful gateways, and these gateways have an uplink communication to cloud services. Of course, a paradigm that fits very well in this scenario is the REST paradigm, and there are two very prominent protocols for doing RESTful deployments on the IoT side and on the cloud side.
K
So between the cloud and the gateway we have HTTP using TCP, and on the constrained IoT side, between the devices and the gateway, we use CoAP. And of course, how do we secure such a deployment? Using transport layer security: we use TLS for HTTP, because of TCP, and we use DTLS for CoAP, because of UDP.
K
So a deployment such as this faces many, many challenges. I picked two of those and put them on the slides. One is network resilience in these deployments.
K
There may be cross traffic or radio interference, or other things like exhausted buffer space, and these lead to retransmissions on upper layers, in this case, for example, CoAP retransmissions, which then again lead to more network stress because of the added packet overhead. And since we are using end-to-end communication here in CoAP, retransmissions have to traverse from the origin to the endpoint, so they have to traverse all the links again for each retransmission.
K
Another problem is end-to-end security. As soon as we have gateways that terminate security, we have to also include them in the trust infrastructure, so it gets more complicated to distribute the keys and to decide which gateway to trust or not. And of course, especially with DTLS, we have this added overhead of establishing a session for IoT nodes when they change endpoint information, like IP addresses or ports.
K
There were many research papers in the previous years that show that information-centric properties, in this case stateful forwarding, caching, and object security, can reduce the burdens of the problems we see and face in those deployments. For example, stateful forwarding and caching shorten the request path and reduce link traversals on retransmissions, and object security can help with the end-to-end security, since we don't need to have session establishments.
K
So we have this paper at ICN '20 that I just mentioned, and there we tried to figure out: okay, can we use the benefits of these ICN characteristics in CoAP deployments? We tried to figure out the building blocks that we would need to build such a thing, and we came up with this list. So if you look deeper into the CoAP communication model, then we see there are a couple of methods that CoAP defines, similarly to HTTP. So we have the CoAP GET method.
K
When we look at the standard RFC of CoAP, we see that it defines an entity called a proxy that forwards requests and relays data packets, or responses, back the same way. So it's very similar to what NDN is doing anyway when forwarding: we basically store state in the proxy and can send responses back on the path. Proxies also have the ability to do caching, which of course is very handy, because NDN is also doing that. And then we have object security for CoAP; this is a relatively new RFC called OSCORE.
K
It's from July last year, I think. It provides authenticated encryption and has features like confidentiality, integrity, a strong request/response binding, and replay protection.
K
Or, yeah, it tries to disallow replay attacks. So with these building blocks, we asked ourselves: okay, how can we form such a deployment? What we did is we took a multi-hop topology, and we configured each constrained IoT device in this topology to actually be a CoAP proxy that forwards requests. Then we enabled the caches and, of course, encrypted and authenticated all the messages using OSCORE.
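The replay protection mentioned here can be illustrated with a toy sliding-window check in the spirit of OSCORE's replay window. The window size and bookkeeping are simplified assumptions, not the exact RFC 8613 algorithm.

```python
# Toy replay window: a receiver accepts each sequence number at most once,
# and rejects anything older than the window.

def make_window(size=32):
    return {"highest": -1, "seen": set(), "size": size}

def accept(win, seq):
    if seq > win["highest"]:
        win["seen"].add(seq)
        win["highest"] = seq
        return True
    if seq <= win["highest"] - win["size"] or seq in win["seen"]:
        return False          # too old, or a replayed message
    win["seen"].add(seq)      # late but inside the window, first time seen
    return True

w = make_window()
assert accept(w, 1)
assert accept(w, 2)
assert not accept(w, 2)       # replay is rejected
assert accept(w, 0)           # out-of-order but fresh is still accepted
```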
K
On the right side, you see this figure where you can see a CoAP GET message that originates from the red node. This GET message is received by a forwarder; it creates state. It then sends out a new GET message to the next forwarding node, and this repeats until we hit the content producer. The content producer produces the content, which is then stored in the cache on each hop. If we have a packet loss, then we do the request retransmissions, and this looks very, very similar to what NDN is doing.
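The hop-by-hop proxy behavior in the figure, each node forwarding a fresh GET and caching the response on the return path, can be modeled minimally like this. The node names and the cache policy are hypothetical.

```python
# Toy model of hop-by-hop CoAP proxying with caches: every node is a proxy,
# and a response is cached on each hop of the return path.

class ProxyNode:
    def __init__(self, name, upstream=None):
        self.name, self.upstream, self.cache = name, upstream, {}

    def get(self, uri):
        if uri in self.cache:                 # cache hit: answer locally
            return self.cache[uri], self.name
        if self.upstream is None:             # we are the origin (producer)
            return f"content-for:{uri}", self.name
        payload, origin = self.upstream.get(uri)  # forward a fresh GET
        self.cache[uri] = payload             # store on the return path
        return payload, origin

producer = ProxyNode("producer")
p1 = ProxyNode("proxy1", upstream=producer)
p2 = ProxyNode("proxy2", upstream=p1)

data, served_by = p2.get("/sensors/temp")
assert served_by == "producer"
data2, served_by2 = p2.get("/sensors/temp")   # retransmission hits the cache
assert served_by2 == "proxy2"
```

A retransmitted request is answered by the nearest cache instead of traversing the whole path again, which is the resilience gain discussed above.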
K
So if you look at the network stack itself, how it looks: on the left side, you can see the CCNx and NDN stack; in this case we use an adaptation layer, here ICN LoWPAN. On the right-hand side, you see the network stack the IETF envisions for the IoT, using CoAP based on UDP, IPv6, and 6LoWPAN, and on the application layer
K
we now have CoAP using proxies and OSCORE. What we now do is basically rebuild, on the application layer, the things that we had on the network layer in the CCNx and NDN stack. And since CoAP is using URIs to request and to return content, we have forwarding based on names, which is also a similarity to CCNx and NDN.
K
I mean, we have to be careful, because in CoAP we don't have static content, so a name or a request can return different content, for example different temperature values.
K
Here's actually a bonus thing, something that surprised me when I built the system and did the experimentation. I looked into the pcap files and saw that the CoAP packets actually got much smaller, and I tried to find out why. The interesting thing is that 6LoWPAN compresses the IP source and destination addresses if they are link-local addresses, and I used link-local addresses because we have this chain of proxy forwarders.
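A rough back-of-the-envelope for the size effect: 6LoWPAN header compression (RFC 6282) can elide an IPv6 source or destination address entirely when it is link-local and derivable from the link-layer address, so in the best case both 16-byte addresses disappear from every hop's packet.

```python
# Bytes occupied by addresses in an uncompressed IPv6 header.
ADDR_BYTES = 16                    # one IPv6 address
uncompressed = 2 * ADDR_BYTES      # source + destination

# Best case with link-local addresses derived from the link-layer address:
# 6LoWPAN header compression elides both addresses completely.
compressed = 0

savings_per_packet = uncompressed - compressed
assert savings_per_packet == 32    # bytes saved on every hop-by-hop packet
```

On constrained radios with frames of around a hundred bytes, 32 bytes per packet is a substantial fraction of the frame, which matches the surprise described above.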
K
It has reduced latency, and of course there's location independence of data, like in NDN. But then we asked: do we also gain insights that we can use for the CCNx or NDN world? This is something that we did not really highlight in the paper itself, but we summarized it a little bit, and these are things that we came up with. Okay, there's an early deployment chance if we have a CoAP deployment that behaves like an ICN.
K
We could actually use caching strategies or forwarding strategies that are designed for NDN; why not use them in CoAP deployments which are already running? And then there are two features that CoAP implements and mentions in the RFCs. For example, there are response acknowledgements.
K
I mean, then we need response acknowledgements, because then we can push the response back as soon as we have the data, instead of it being pulled by the retransmissions.
K
But of course, then we need response retransmissions and acknowledgements in case a response gets lost. There's also efficient cache revalidation using an ETag in CoAP; it's very similar to what HTTP is doing. So if we have a cache with stale cache entries, and an interest tries to get content from that cache, the cache, or the node, can actually ask the producer of that content.
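The ETag-based revalidation can be sketched as follows. The response codes follow RFC 7252 (2.03 Valid confirms a cached representation without re-sending the payload); the ETag and data values are made up.

```python
# Toy CoAP cache revalidation with ETags: a proxy holding a stale entry
# asks the producer to validate it; if the ETag still matches, the producer
# answers "2.03 Valid" with no payload and the cached body is reused.

def revalidate(cache_entry, producer_etag, producer_payload):
    if cache_entry["etag"] == producer_etag:
        return ("2.03 Valid", cache_entry["payload"])   # reuse cached body
    return ("2.05 Content", producer_payload)           # full transfer

entry = {"etag": b"v1", "payload": b"21.5C"}
assert revalidate(entry, b"v1", b"21.5C") == ("2.03 Valid", b"21.5C")
assert revalidate(entry, b"v2", b"22.0C") == ("2.05 Content", b"22.0C")
```

The win is that a validation exchange carries only the small ETag instead of the full payload whenever the content has not changed.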
K
We also have ongoing efforts, and we are mostly interested in multicast, because CCNx and NDN inherently support multicast and we might be able to also transfer that to CoAP itself. NDN is, of course, using request aggregation and response deduplication, and why not also use that for CoAP?
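Request aggregation and response deduplication, which NDN's PIT provides inherently, could hypothetically look like this on a CoAP proxy. The class and field names are invented for illustration.

```python
# PIT-style aggregation: identical pending requests are collapsed into one
# upstream request, and the single response is fanned out to every waiting
# downstream face.

class AggregatingProxy:
    def __init__(self):
        self.pending = {}          # uri -> list of waiting downstream faces
        self.upstream_requests = 0

    def request(self, uri, face):
        if uri in self.pending:            # aggregate: no new upstream GET
            self.pending[uri].append(face)
            return
        self.pending[uri] = [face]
        self.upstream_requests += 1        # only the first request goes up

    def response(self, uri, payload):
        faces = self.pending.pop(uri, [])
        return {face: payload for face in faces}   # deduplicated fan-out

proxy = AggregatingProxy()
proxy.request("/sensors/temp", "face-A")
proxy.request("/sensors/temp", "face-B")
assert proxy.upstream_requests == 1
out = proxy.response("/sensors/temp", b"21.5C")
assert out == {"face-A": b"21.5C", "face-B": b"21.5C"}
```

As the speaker notes next, this is straightforward for static content; for content that changes per request, deciding which pending requests one response may satisfy is the hard part.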
K
I think it should be quite easy to do this for content that is static, but it's probably more complicated for content that changes. So the next step would be to evaluate group communication, I mean protected group communication, for example using Group OSCORE, and see how that works out in this kind of CoAP deployment, and how it affects the cacheability of protected messages.
A
So I would really appreciate that you picked up the nice discussion at the ICN conference and came back with a follow-up presentation; that's exactly what we had hoped for. So thanks, Cenk, that was really nice.
A
So while people are still thinking about their questions, I will repeat my comment from the conference. If you can go back to slide six, for example.
A
Here, no, no, that wasn't six; maybe move forward, just to the one where you show the network. Maybe this one is fine, actually, yeah, that's the one. So I'm not sure we are really comparing apples to apples here, because it seems to me that in this network you have something like a pre-configured proxy chain.
A
In any ICN approach you would probably, I mean, as for example in your other paper on NDN in the wild for IoT, you would maybe have a forwarding plane that is able to, you know, find next hops itself and so on.
K
I mean, that's true, at least for the evaluations of this paper.
K
We used, like you said, the pre-configured forwarder chain, but I think it should be quite easy to have a discovery protocol. I mean, there are, for example, discovery protocols in CoAP itself, using a resource directory. I'm not yet sure how we would be able to integrate this into these kinds of deployments, but I don't see huge showstoppers. I mean, yeah, of course we need to think about whether it would work to have dynamic topology management.
B
And then I have one question: to what extent do you think the immutability properties that we tend to hold dear in ICN affect some of these design decisions?
K
A
Okay, and so regarding your potential future directions: one outcome of this could be that, in the end, you basically re-implement the ICN node behavior, mostly, but you're just using CoAP as, you know, a different packet format or convergence layer, if you want.
B
K
Yeah, I mean, this re-implementation I'm not yet so sure about, because everything we used up until now is RFC-compliant, so I mean, it's already there. The building blocks that CoAP provides allowed us to build this kind of system, and why not use these things? I don't think that when the designers of CoAP made the specification, they had the intention of using, for example, proxies on every hop of the network.
K
If you want to make it further more like an ICN, what else would we need? Yeah, let's see, I mean.
B
A lot of what you talked about with retransmissions and ACKing data seems to me to be oriented toward reclaiming buffer space in the intermediaries, whereas hop-by-hop retransmission of interests in ICN kind of obviates that, and your cache management is based only on arrival of data.
K
That's true, yeah. I mean, there's this one use case I tried to highlight here: retransmissions also cost bandwidth, of course, and reducing them also reduces radio interference.
B
And I think, well, I would mention it's easy to produce hop-by-hop acknowledgements for NDN, right? The question is: an end-to-end acknowledgement doesn't really make any sense, because then the producers know things like the consumer count, and, you know, what does it mean to acknowledge data that may have reached, you know, a hundred consumers?
B
A
Well, okay, I mean, I think this is an interesting direction, and we are looking at this IoT space from different perspectives. I mean, the other perspective is certainly to, you know, build something from the ground up that does what IoT applications need, without any legacy, and, well, I think it's going to be interesting to compare these two tracks, at least.
A
One second, yeah. Okay, this brings us to the end of our agenda. Yeah, thanks everybody for staying for so long.
A
We actually didn't expect that we would really fill these three hours, but we did. Next time we should plan in a break, if we take that long, or split it up into two meetings. But anyway, that was really interesting, and thanks everybody for presenting and for participating in the discussion. Just a few things: yeah, let's continue using the mailing list for technical discussions as well.
A
So I think there were probably some questions raised today that we didn't answer, and I think that's still a good way to discuss those. Other than that, we used to have an effort for organizing draft reviews, like an online spreadsheet. I think Dave and I will have to revive this and send a message about that and future meetings.
A
So, as you can imagine, we're probably not going to meet face to face anytime soon, so there will be more online meetings. If you have any ideas, or feedback regarding today, on how to, you know, do things differently,
A
please talk to us. One thing that we hope to do soon is another focused meeting on FLIC, as Christian mentioned in the beginning, so yeah, we will announce this, of course, and everyone would be welcome to attend that. Okay, other than that, thanks again, and yeah, please stay safe everybody, and hope to see you again soon. Thanks.