From YouTube: IETF113-ICNRG-20220325-1130
Description
ICNRG meeting session at IETF113
2022/03/25 1130
https://datatracker.ietf.org/meeting/113/proceedings/
A: And audio, video and so on, and there's one that is labeled... probably "share slides" or something. Well, I asked to share slides.

A: Right, so I need to just... right: "share preloaded slides", and then you get a list, and then you select your presentation, and then you have the control and can move forward and so on.
A: Okay, yeah, welcome everybody to ICNRG at IETF 113 in Vienna. Special greetings to our friends in the room; thanks, Matthias, for being our on-site co-chair today. It doesn't look like we will have lots of questions from the room today, but let's see; Matthias is going to manage that in case there's something to be done. Hello, Thomas, I see you in the back. Okay, yeah, I'm the co-chair, and my co-chair is Dave Oran, and I think you have probably seen this slide by now, on day five of the meeting. But just make sure that you are using the Meetecho lite tool if you are in the room, and if you're not talking or presenting, turn off your mic and video. As always, of course, this session is recorded.
A: Just quickly: we still apply the IRTF IPR rules, so, in summary, let us know if there's anything IPR-related that you see being discussed here. Everything is recorded. The IRTF is using the IETF privacy and code of conduct rules, and we also have anti-harassment procedures, so in case of any issue there is an ombudsteam that you can contact.
A: And a quick reminder: we are doing research here, not standards development. The best we can do is publish experimental specifications as RFCs; the idea, in general, is to enable research and generate insights for the IETF or the general Internet community.
A: Okay, today we are supported again by Jose; thank you very, very much, really appreciated. His role, as usual, is taking the notes in the HedgeDoc note-taking system. I pasted the link into the chat; if you want to follow along or assist in the note-taking, you are welcome to do so. And this is our agenda; we have a really nice agenda today, so we are not going to read this out, just asking.
A: Okay, great. So before we get to Edmund's presentation, let's just quickly check where we are in the group. We recently had a really nice flow of outputs from ICNRG being published, and we have a few more that we want to get out.
A: CCNinfo is, I think, still waiting here for IRSG reviews. We're going to talk about FLIC later. ICN-LTE basically went through the IESG conflict review, the authors just published a new version, and as far as I understand, what Colin is asking for right now is a statement on the IPR situation. It would be great if the authors could clarify that as soon as possible, to get this published soon. And I see Colin joining the queue.
D: All right, yeah, just quickly on that one: the issue there is that there was an IPR declaration on the individual draft before it was adopted as a research group draft, which hasn't been refiled for the research group draft. So we're just waiting on clarification whether that's still relevant, or if the draft has changed so that it isn't.
D: What I was saying: the IPR issue is that there was an IPR declaration on the individual draft before it was adopted by the research group, which hasn't then been reflected onto the research group draft. So we're just looking for clarification on whether that still applies and, if so, it needs to be updated for the research group draft.
A: Yeah, okay, thanks. Dirk Trossen is at the microphone.
F: Yeah, hi. As I mentioned to Colin, because he's been polling the authors: while I have no IPR declaration, obviously the draft was initiated when I was with my previous company. The problem is, as you may recognize, given the current situation, I can't query the company directly; they won't reply to any email that I send directly. So we need to figure out how to get in touch with InterDigital to figure out if there's any IPR that may be pending. I can't declare it, I mean, from my side.
D: I... if you think it's necessary... they've already made an IPR declaration for that draft, so...
A: Okay, yeah, thanks for that. And then you may have noticed that there have been new versions of ICN Ping and ICN Traceroute recently. We want to publish these specifications soon as well, and right now they're waiting for the shepherd write-ups; that's me, so I'll take care of this very soon.
A: And Dave, do you know the status of the NRS architecture considerations draft?
A: So I think this is actually done, and we're waiting for it to be published. Maybe Dave can come back later and clarify any questions that may arise.
A: All right, okay. So, Edmund, are you ready?
I: Right, great. Thank you, Dirk, and thank you, Dave, for inviting me to present today at the ICNRG meeting. I'm Edmund Yeh from Northeastern University, and this talk is about a project that we have been pursuing for more than a year now. It's called NDN for Data Intensive Science Experiments, N-DISE for short. This is actually a follow-on to a project called SANDIE, which started in 2017, and I believe two years ago I made a presentation to ICNRG about SANDIE and N-DISE; of course, at that time it was more preliminary.
I: Essentially, you have a situation where there is an enormous amount of data being taken, either at one or at a number of different locations around the world. In the case of the Large Hadron Collider, of course, the data is taken at CERN in Geneva, and that information then has to be distributed around the world to a couple of hundred different institutions for analysis. These are very big data volumes that have to be distributed, and they're distributed for analysis and computation purposes.
I: We had started this collaboration almost five years back, and there actually exists a problem in this area: the data volumes, for instance for the Large Hadron Collider, are set to grow very fast, almost ten times, with the high-luminosity experiments coming up. They're already on the order of exabytes and are set to grow another ten times in the next couple of years, and even given the considerable resources that the LHC network has, they are still going to have a lot of problems handling this data volume and distributing it around the world. So they actually sought out NDN, and particularly our group, to work with them to build a new system, as a future system that can distribute this data more effectively for the LHC community.
I: We also have participation from Susmit Shannigrahi from Tennessee Tech. Susmit has been working with NDN for a long time as well, and he had previously worked in the climate area, using NDN for climate data. And of course we're in partnership with the LHC and genomics collaborators and the NDN project team at large.
I: We're also interested, in this particular project, in genomic data, although that work is still more preliminary compared to the work for high energy physics. There is a need to use diverse computation, storage and networking resources to meet the challenges posed by these data-intensive science fields, and I think during my last talk two years ago I outlined the need to build a system with an architecture that is more appropriate for the needs of these applications, which are very much centered around data.
I: The traditional architectures, which center on connections and servers and processes, are not especially well suited for these applications, whereas we believe that NDN is well suited. What we're doing is building a data-centric ecosystem providing agile, integrated, interoperable solutions for heterogeneous data-intensive domains; that is sort of the overall goal, and N-DISE is an important project in furthering this goal. So what are the goals of N-DISE?
I: In particular, it is to deploy and commission the first prototype production-ready NDN-based petascale data distribution, caching, access and computation system serving major science programs. So it actually has the ambitious goal of really putting in a system that can work for data-intensive science based on NDN, with high energy physics as the leading target use case.
I: The BioGenome and human genome projects, ATLAS, LSST and SKA are future use cases. We want to leverage NDN protocols, high-throughput forwarding, caching methods and containerization techniques, integrated with SDN methods and FPGA acceleration subsystems, to deliver LHC data over wide area networks at throughputs approaching 130 gigabits per second.
I: So we want to really build a system that delivers data over a real wide area network, and a lot of the work is actually in interfacing with entities such as Internet2 and ESnet to put this network together and test it geographically; it's basically cross-country right now in the United States. We would like to dramatically decrease download times by using optimized caching, and we're building an enhanced WAN testbed with high-performance NDN data cache servers.
I: All right. This talk is really about updating the progress of N-DISE: what have we done over the last year or so. I would like to talk about a few things that this team has been doing. I'd like to discuss the N-DISE deployment architecture and NDNc, which is an integration of ndn-cxx with NDN-DPDK for the purposes of the N-DISE project; the WAN testbed and the throughput tests that we've been performing; the experiments that we've been doing with optimized caching and forwarding; the congestion control work that is currently ongoing in the UCLA group and how that's interfacing with caching considerations; and the FPGA acceleration work that Professor Cong's group has been doing at UCLA. All right.
I: So let's first talk about the deployment architecture and NDNc. Here's basically the deployment architecture; to me this looks a little bit small, actually. I hope you are going to see it full screen here.
I: You can see that we have, basically, on the left here what you can look at as the consumer side, and on the right-hand side is the producer side. We have developed a containerized setup where you see that we're building Docker containers, which enables us to make this work for various kinds of operating systems. These CMSSW jobs are jobs which are generated within the LHC system; they are jobs which require certain calls to certain data sets.
I: The XRootD plug-in is essentially a plug-in that interfaces with the calls within the LHC networking environment and then turns these requests into, essentially, NDN consumers. From there we have a set of functions which are implemented using extensions of ndn-cxx and which interface with NDN-DPDK.
I: That talks to the NDN-DPDK forwarder through something called memif, and we then go through a network which is a high-speed network with capacity up to 100 gigabits per second. On the right-hand side you see essentially the producer side, finally interfacing with a number of different services provided by the LHC system.
I: ...and the vector packet processing within NDN-DPDK; I'm going to skip some of the details here. Each face can transmit or receive one or many of these blocks in a single burst, and it offers PIT token support, which is something that is needed by NDN-DPDK, and it uses this NDNc library to encode and decode layer 2 and layer 3 packets.
I: This is something that we're still playing around with a little bit, to see how these congestion windows should be set, but this is built into the NDNc system. As for future plans for NDNc:
I: We'd like to do extensive benchmarking to understand the current behavior and the maximum throughput performance that can be achieved, and identify possible bottlenecks; add multi-threaded support to memif and the pipelines (currently that's single-threaded, as we mentioned over here); port the NDN XRootD plug-in developed in our previous project to NDNc; and extend the number of applications to cover some of these services that are provided by the LHC network and the file systems they have. Okay.
I: So now let me talk about the WAN testbed and the throughput tests that we've been doing. Here's the testbed that we've put together, and in fact, putting this testbed together was a major time sink for us. But it's worthwhile, because this is the high-performance testbed that we wanted to have.
I: It has a number of nodes; basically, it involves the participant institutions. For Northeastern, the node is actually sitting at MGHPCC, which is a shared computing facility between MIT, Harvard, Northeastern and BU. There's a node at UCLA, there's a node at Tennessee Tech, there are multiple machines at Caltech, and we are also very happy to have the collaboration of StarLight in Chicago.
I: So you see the topology; essentially it's a fully connected network, with different link capacities. We have very high capacity links of 100 gigabits per second from Northeastern all the way to Caltech, and some of the links are smaller capacity, 10 gigabits per second. But this is a pretty high performance network running across the country, and you also see the specifications, the configurations, of the machines sitting on this network.
J: Hi, Edmund. Are these all running over Internet2, and is it L2 stitching?
I: We put this together with a lot of collaboration from Internet2 and ESnet, and yeah, this is running at layer 2. So we have these VLANs; the numbers you see there are VLAN numbers which have been provisioned, and in fact, just doing the provisioning of the VLANs can sometimes take months to accomplish.
I: Yes, some of those machines are shared, but the machines sitting at the participant institutions are all dedicated. The machine at StarLight, I think, is shared, but it's not heavily used by other applications.

J: Okay, thanks.
I: So this is the network that we're currently experimenting over, and it's actually an incredible resource that we have here, because we're able to do real experiments on a wide area network basis.
I: It also, of course, requires a lot of upkeep. For the throughput tests, I won't go through all the details here, but we actually wanted to do the throughput test on the 100 gigabit link from Northeastern to Caltech; for various reasons, though, the VLAN from StarLight to Caltech was only recently put up, so we ended up doing the throughput test...
I: ...basically on that orange link here that you see. What that does is set up a loop at StarLight between two machines sitting at StarLight; the path actually goes from StarLight in Chicago to Canada and then back to StarLight. But we do have a consumer and a producer on two different machines, and the capacity of that path is basically 40 gigabits per second, according to iperf, and you can see the configurations of the machines there.
I: We have two threads for each consumer application, we're running three forwarding threads, and we're launching six consumers simultaneously. Currently we're using a fixed window size for congestion control, and there are various reasons for that; well, we have a dedicated path there, basically. We're sending files which are one gigabyte each; these are evenly allocated across the three forwarding threads, and we are caching these files at the producer, so they can be accessed quickly, and we request them from the consumer machine.
I: So here's what we're getting, and we're really trying to push the throughput on this path: you can see that over a span of six minutes, the average throughput over the different consumers is listed, for a total of about 21 gigabits per second. That's the highest number that we got. We're still playing around with the different configurations in this setup, but this is currently the throughput that we've been getting, and this is all over a wide area network.
I: This is a real WAN over continental distances, and so far I think that's the highest number that we have gotten over the past few years. We previously had gotten lower numbers: I think at SC19 we had 6.7 gigabits per second, and at SC21 we achieved about 14.
I: So we are definitely pushing that throughput and understanding better what the system is capable of. Of course, this is running the consumer and producer that we have written, integrated with NDN-DPDK, and it shows that NDN really is capable of high performance over real wide area networks.
I: Okay, now let me talk about caching and forwarding; this is also a big part of the project.
I: The aim is to test, in a real experimental setting, the caching algorithms that were first developed by my group at Northeastern. We're using this testbed in two different setups; here I'm showing two different sets of results. In one, we have two consumers, at UCLA and StarLight, going through a forwarder at Northeastern and ending with a producer at Caltech; in the other, there are two forwarders in the middle, one at Northeastern and one at Tennessee Tech. We have 30 files, where each file is 4 gigabytes, and we generate requests at the consumers according to a Zipf distribution. We run the caching algorithm developed by my group, called VIP, and we compare its performance against the case where you do not cache anything, against LRU-style caching, and against an improved version of that which is actually within the NDN-DPDK implementation, called ARC.
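A workload of this shape is easy to reproduce. Below is a minimal Python sketch, assuming numpy; the Zipf exponent used in the actual experiments is not stated in the talk, so s = 1.0 is an assumption.

```python
import numpy as np

# Sketch of the request workload described above: 30 files whose
# popularity follows a Zipf distribution (file 1 most popular).
# The exponent s = 1.0 is an assumption; the talk does not state it.
N_FILES = 30
s = 1.0

ranks = np.arange(1, N_FILES + 1)
probs = 1.0 / ranks ** s
probs /= probs.sum()

rng = np.random.default_rng(0)
requests = rng.choice(ranks, size=10_000, p=probs)  # stream of file indices
```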
I: These are real experimental results. What I'm plotting: the different colors correspond to the different file indices, so file 1 is the most popular one according to the Zipf distribution, file 30 is the least popular one, and they're ordered like this. Now, the caching algorithm that we're running doesn't know the Zipf distribution beforehand.
I: It just adaptively measures these things and adapts the caching pattern in real time, without any prior knowledge. What we're plotting here is basically the cache score which is output by the VIP algorithm, and you can see that at the forwarder node, where the cache is, the cache score of the most popular file is actually the highest.
I: You can see that they're ordered in the right way, and if you actually look at what is cached at the forwarder node, you see that it's caching exactly what it's supposed to cache. The cache is capable of caching five files, and it caches files one, two, three, four, five; that is exactly what it caches, after some stabilization. We run it for a few minutes.
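This is not the VIP algorithm itself (VIP is a joint caching-and-forwarding scheme), but the adaptive behavior described here can be sketched with a simple online popularity tracker that, with no prior knowledge of the distribution, converges to holding the top five files:

```python
from collections import Counter

# Sketch only; not the actual VIP algorithm. A forwarder counts requests
# online and keeps the most-requested files, which for a Zipf workload
# converges to files 1..CACHE_SLOTS, as observed in the experiment.
CACHE_SLOTS = 5

class PopularityCache:
    def __init__(self):
        self.counts = Counter()

    def on_request(self, file_id: int):
        self.counts[file_id] += 1

    def contents(self) -> set:
        return {f for f, _ in self.counts.most_common(CACHE_SLOTS)}
```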
I: Now, what's interesting here: we compare the performance of the different scenarios. On the x-axis is the throughput in gigabits per second; on the y-axis is the delay, which is actually the download delay. We plotted the numbers for what you get at StarLight, what you get at UCLA, and the overall average, and the squares are for VIP.
I: The circles are for ARC, which is a version of LRU, and the triangles are for the no-cache case. You can see that VIP is actually getting lower delay and also more throughput in general. The difference from ARC here is not big, but in the next experiment you will see the gap actually gets bigger.
I: So you see that the effect of caching is basically to increase throughput while at the same time decreasing delay: you bring the data closer to the consumer, which decreases delay, but you also obviate unnecessary further transmissions of the request upstream. You therefore reduce congestion in the network, which further reduces delay and also increases throughput; so you have both effects here.
I: No, this is DRAM caching.

B: Great, thanks.

I: Right, so the next experiment; let me see... oh, here. In test two, remember, we have two nodes, so two caching sites, and I just want to point out what you would expect in this second setup, where you have the two nodes.
I: You would expect that the first caching point should cache the top five most popular files, and the next five should be cached over here, right? That's what you would expect, and in fact that is pretty much what happens. You can see that at the first forwarder you cache the top five, and at the next forwarder you essentially cache the next five, with some minor variations.
I: This, I think, has to do with the stochastic variations of the algorithm, but overall you see that the right files are being cached here, and of course the algorithm is doing this completely adaptively; it doesn't know this beforehand. In the results for test two, you can see that again VIP is doing essentially the best everywhere, as it did in the previous case, but in this case the improvement is even more pronounced.
I: It has greater throughput and lower delay compared with ARC, and certainly compared with no caching. So this shows very clearly that it pays to be clever about caching. And, by the way, we have actually not even fully exploited the power of VIP, because VIP is actually a joint caching and forwarding algorithm; in this topology we really don't have much in the way of multi-path capability, so we're really only leveraging the caching part, and already we're getting a lot of improvement in performance.
I: So the next step for us is really to test these out over WAN topologies that have more multi-path capability, and I think we'll get even better results; but it takes time to do that, and we'll report on it in the future. All right, I don't know how much time we still have, but I want to say something briefly about the work that has been happening...
I: ...on congestion control and FPGA acceleration. Lixia's group at UCLA has been working on congestion control, and in particular on the impact of caching on congestion control. There are some interesting examples they've been studying. For instance, in this particular example, you have two consumers, C1 and C2, that are fetching the same object from the producer P, and the objects are segmented.
I: However, C1 has a higher-capacity, higher-speed connection to F1, which is a caching point, and C2 has a slower connection. C2 is going to start first, and so the expectation is that C1 will initially be satisfied by the cache, because C2 pulls the object first and caches it at F1.
I: So C1 would initially be satisfied by the cache, later catch up with C2, and eventually be satisfied by the producer; because C1 has higher bandwidth, the expectation is that it first gets the data from the cache and then later catches up with C2 and is satisfied by the producer, right?
I: Instead, what is actually observed is that C1 continues to be satisfied by the cache, and C2 keeps soliciting data from the producer in steady state. Here's a plot of what you would expect to happen: the x-axis is time, and the y-axis is the delay observed by C1. You would expect that it first gets data from the cache at F1, then gradually takes over, and eventually gets it from the producer.
I: The reason is that the scheme that, I think, Lixia's group is working on bases its congestion control on looking at the difference between the interest sending rate and the data receiving rate. C1 will look at this difference and, because the receiving rate is essentially lower than the sending rate, interpret it as a congestion signal, and it actually decreases its request rate.
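A minimal sketch of the consumer-side inference being described; the UCLA scheme's details aren't given in the talk, so the threshold and back-off factor below are assumptions:

```python
# Sketch: treat a persistent gap between the Interest sending rate and the
# Data receiving rate as a congestion signal and back off the request rate.
# A cache hand-off (as in the C1/C2 example) changes the receiving rate and
# can therefore be misread as congestion. Thresholds/factors are assumptions.
def adjust_request_rate(interest_rate: float, data_rate: float,
                        request_rate: float) -> float:
    mismatch = (interest_rate - data_rate) / max(interest_rate, 1e-9)
    if mismatch > 0.05:            # tolerance threshold (assumption)
        return request_rate * 0.8  # multiplicative back-off (assumption)
    return request_rate * 1.01     # otherwise probe gently upward
```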
I: So the observation here is that this is a somewhat unexpected way in which congestion control interacts with caching. They're still exploring this, and they're interested in working out congestion control measurements which are resilient to these RTT variations.
I: What they have done so far is develop a multi-path interest forwarding and hop-by-hop congestion control design that uses queuing delay as a control signal. I don't think it's published yet, but it is well under development. It is able to respond to different kinds of bandwidth limitations upstream, and it uses the queuing delay as a signal which is propagated hop by hop downstream to control the sending rate, the request rate.
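The design itself is unpublished, so the following is only an illustration of the idea as described: each forwarder compares its local queuing delay to a target and signals the previous hop to slow down or speed up its Interest rate. The target value and adjustment factors are assumptions.

```python
# Illustration only; the actual (unpublished) UCLA design may differ.
TARGET_QDELAY = 0.005  # 5 ms local queuing-delay target (assumption)

def on_queue_sample(qdelay: float, downstream_interest_rate: float) -> float:
    """Return the Interest rate this hop asks its downstream neighbor to use."""
    if qdelay > TARGET_QDELAY:
        return downstream_interest_rate * 0.9   # back-pressure signal
    return downstream_interest_rate * 1.01      # headroom: allow more
```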
I: Of course, the question then is how this interacts with caching; that's what they're exploring right now, and we'll report more details on this later. All right, let me say something about FPGA acceleration; this is the group of Jason Cong working on the FPGA acceleration.
I: FPGAs are used in a wide variety of applications: in machine learning, and in networking for IP longest-prefix matching, packet inspection for firewalls, and so forth. Why use FPGAs? They are capable of pipelining tasks; each cycle can start a new IP lookup, for instance. They can do parallel processing: multiple interfaces can each have their own processing block instead of sharing one.
I: The goal is to use the FPGA in the forwarder input stage. We're talking about the NDN-DPDK forwarder, where the components of the interest name have to be hashed and there has to be a table lookup to see which interest packet should be dispatched to which forwarding thread; so essentially you have to do the prefix hashing and you have to do the table lookup.
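As an illustration of that input-stage step (not NDN-DPDK's actual dispatch code), the sketch below hashes the first k name components and maps the digest through a small table to a forwarding thread; the value of k, the table size, and the use of SHA-256 are all assumptions.

```python
import hashlib

# Sketch of name-hash dispatch: hash a k-component name prefix, index a
# small table, and pick a forwarding thread. Parameters are illustrative.
N_THREADS = 3
TABLE_SIZE = 256  # dispatch-table entries (assumption)

def dispatch_thread(name_components: list, k: int = 2) -> int:
    prefix = b"/".join(name_components[:k])
    digest = hashlib.sha256(prefix).digest()
    entry = digest[0] % TABLE_SIZE      # table lookup by hash
    return entry % N_THREADS            # table entry -> forwarding thread
```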
I: Both of those are computationally expensive, and this is what the FPGA will be designed to do. We're still in the preliminary stages of this, but there's been some progress. Looking at the hashing of prefixes and name components, and the table lookup for thread dispatching, preliminary results show a four-times improvement in throughput over doing this all in the CPU, but the integration with NDN-DPDK is still ongoing, and there are actually multiple stages to this.
I: First, we will apply the FPGA acceleration to the NDT, which is the dispatch table for the input in the NDN-DPDK forwarder. Then we will apply it in a similar way to the combined PIT-CS table, and finally to the FIB. We are still in the first stage, but there are some promising results showing some throughput improvement.
I: Okay, I think I've run over time. To give you a summary: of course, what I have said here is really a progress report, just to give you some results, and we're still working hard on this. But I think we've achieved quite a bit; just having the WAN testbed put together was a major achievement as far as we're concerned.
I: We have managed to obtain a throughput of roughly 21 gigabits per second over this testbed, and we've shown that optimized caching and forwarding yields significant improvements in both delay and throughput over this testbed; these are real experiments.
I: We have been developing hop-by-hop congestion control based on queuing delay, looking at interactions with caching. FPGA acceleration of the hashing of name prefixes and of the table lookup for thread dispatching shows a four-times improvement over doing this in the CPU. We're working toward the prototype production-ready NDN system, and we are seeking, in the long term, collaboration with other domain sciences and with the networking and computer systems communities. All right, thank you very much.
A: One question that I had: obviously a lot of work went into setting up this testbed and really accelerating data access for these large data sets and so on. I was wondering: once you are able to do that, often you want to do something with the data, like processing the data. Is that also in scope of the N-DISE project?
I: Yeah, a good question. It was actually in the proposal: there was a section in the original proposal where it was proposed that we would also look at joint data movement and computation scheduling for these workflows, but then the budget was cut a little bit, so we removed that from the scope of the project for now.
I: Obviously, that's the ultimate goal of, hopefully, a continuation of this project: to not only deal with the data movement, which is what we're doing right now, but to handle the workflows. That's definitely very relevant; it's the whole problem in the high energy physics application and many of these other applications. So I think there's still a lot of work that remains to be done, but the data movement is definitely the first step.

A: Okay, yeah, cool. So we have Thomas.
K: Hello, Edmund, thanks for this very interesting talk. A question on your caching analysis: you're dealing with high energy physics, or other huge bulk scientific data, and you were assuming a Zipf distribution. I was a bit surprised about this; I mean, I would expect Zipf with Netflix, but is there a use pattern that actually distinguishes between certain portions of the data in high energy physics?
I: We actually took statistics using actual request data from the LHC community, and in fact there is a similar falloff to what you would expect. Usually you have data sets which are so-called hot data sets, which everybody wants to work on; so there is actually a falloff in the popularity, and we fit the distribution to this one.
I: It's a good approximation, so this is actually based on data that we collected. Does that answer your question?
K: Yes, it does, thanks.

I: Yeah, so a similar phenomenon is actually true in physics. What happens is that there are typically some big questions that everybody is trying to explore, and that requires certain data sets, which are maybe the most recent one, or some data set which is really relevant to the task at hand; and then there's, of course, a tail.
J: Thanks, Edmund. As you may know, we're trying to do something similar; we're trying to build on some of your SANDIE stuff, actually, and you've probably seen that from my master's student. But my question is about the three-hop setup, where you showed the architecture of the consumer and the producer.
J: So the question is: how do we chunk?
I: Yeah, I'm trying to find this... so, okay. I'm not sure if my student is online right now, but maybe he can give you better details on this. I think... yeah, he's there.
I: I think the chunking is essentially something that is inherent in the application. Are you able to speak quickly? Or, no, I guess he has to ask to be allowed to speak, or something, right?
L: Yeah, the chunking is implemented directly, embedded into the file server and the consumer application: how to chunk the content. That's basically where all of this is done.
J: Oh, yes.
B: Well, I'm going to abuse my queue position and ask two questions; hopefully they're both real easy ones. Number one: at your 21 gigabits per second, are you doing signature validation?
B: Right, right. So you're not actually exploiting the security properties of NDN.
I: So what happens in the high energy physics application... I mean, that's a good question. That's something we will be doing; I think the provenance aspect is a part of the project, but right now we're not doing that.
I: In fact, in the physics application, what happens is that you have a pretty strict security perimeter around who can access the system; you have to be approved, and so on and so forth. So they don't usually worry too much about security within the physics application itself. But nevertheless, as a part of the project, we did propose to sign the packets and to verify them, to provide provenance; we haven't done that yet.
B: Okay, and then one final quick question: a 4x improvement over CPU for an FPGA seems not very impressive. Do you have any insight into what the bottleneck is?
I: Yeah, you're right. These results are still preliminary, to tell the truth; I think Jason's group is still working on that.
B: We're working on some stuff that's going to be bottlenecked on FPGA-style hashing for load distribution to cores. So I think that work is of really broad interest, and publishing something on it would be really great.
I: Well, you know, I absolutely don't want to say anything definitive here, because Jason's group is really the expert on that. We can take this offline, and I can probably give you more insight after talking with their group, but it's ongoing work.
I: You shouldn't see this as finalized by any means.
B: We saw the same thing with network coding: if you can't exploit the high parallelism, by doing, you know, 20, 50, 70 packets at a time in the FPGA, you're just not going to win, because the inherent clock rate of the FPGA isn't all that high, right? You really need to be able to parallelize things.
I: Absolutely, so I think you're right; this is not that impressive yet. But the FPGA work, I have to say, is ramping up. The first part of the project was much more focused on these other things that I talked about, the caching and setting up the WAN testbed, but I think during the next couple of months we should be seeing much more ramping up of the work on the FPGA, and we'll keep you updated.

B: Great, thanks.
A: Okay, thanks. I'm sure people have more questions, and so have I, but let's defer this to the mailing list; I'm sure Edmund would be happy to answer your questions there. Thanks again, Edmund.

I: Thanks very much for the opportunity; hope to see you again soon.
A: Okay, so let's move on; next would be the one on delta time encoding for CCNx.
N: Yes, thanks, Dirk, and hi to Vienna. I think you can hear me, right?
N: All right, yeah, thank you, and thanks again for this five-minute slot that you provided us. This draft is about a compact time representation for CCNx. It was actually first presented in 2019, I think in Singapore, by Thomas Schmidt, and then in 2020, virtually, by me. So far it hasn't received that much attention on the mailing list, and that's why I wanted to give a brief recap of the core ideas of this draft.
N: In resource-constrained environments we usually have low bandwidth and high latency; access to wireless links is typically slower than the packet processing within the stack itself, and the packet transmission is usually the dominating factor for the energy consumption of these little tiny devices. Header compression is actually a solution to reduce the energy expenditure, but it also improves transmission reliability, which we also measured and showed during our work on ICN LoWPAN, which is now RFC 9139.
N: CCNx makes use of two different types of time representation. First, there is relative time, which is used in the interest for the interest lifetime; and there is also absolute time, which is used for the signature time, the expiry time and the recommended cache time, in both message types.
N: In this particular draft, the idea was inspired by RFC 5497, which defines a time TLV, a dynamic range encoding, for MANET routing-style protocols; and this is all very similar to IEEE 754, the floating point standard.
N: You then calculate a time value, in seconds, from this one single byte. The subnormal form of the formula is needed to actually solve the underflow issue, because if you only go with the normalized formula, then you have a huge jump between zero and the smallest number you can represent.
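A minimal sketch of such a one-byte codec, in the IEEE-754-like style the draft describes. The 4-bit exponent / 4-bit mantissa split and the bias below are illustrative assumptions, not necessarily the configuration the draft converged to:

```python
M_BITS, E_BITS = 4, 4            # mantissa/exponent split (assumption)
BIAS = 2 ** (E_BITS - 1) - 1     # exponent bias (assumption)

def decode(byte: int) -> float:
    """Decode one byte into a time value in seconds."""
    e = byte >> M_BITS
    m = byte & ((1 << M_BITS) - 1)
    if e == 0:
        # Subnormal form: fills the gap between 0 and the smallest
        # normalized value, i.e. it solves the underflow issue.
        return (m / 2 ** M_BITS) * 2.0 ** (1 - BIAS)
    # Normalized form with an implicit leading 1.
    return (1 + m / 2 ** M_BITS) * 2.0 ** (e - BIAS)

def encode(seconds: float) -> int:
    """Pick the byte whose decoded value is closest (brute force, 256 cases)."""
    return min(range(256), key=lambda b: abs(decode(b) - seconds))
```

With these assumed parameters, zero is representable, the subnormal values step smoothly up from it, and the largest encodable value is (1 + 15/16) * 2^8 seconds; changing the mantissa/exponent split trades precision for range, which is exactly the configuration discussion mentioned next.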
N: In the last presentation, in 2020, we actually presented multiple configurations for this single byte, because you can play around with the mantissa and exponent sizes and arrive at different ranges. We had this discussion on the mailing list two years back, and I think this is the configuration that we converged to; it is also now in the draft, and with this configuration we can have numbers that range from zero.
N: All right; if there are any questions, then please raise your hand. From here on: there was this big question, also two years back, on how we can actually integrate this compact time into the current CCNx protocol specification, and there were multiple solutions to that.
N: One is that, if we look for example at the interest lifetime, we say that if the length is set to one, then the forwarder has to interpret this number as the compact delta time, and if it's any length greater than one, then it's the typical CCNx time delta in milliseconds. For the recommended cache time it is a little bit different; as I said before, its length is fixed to eight, so it doesn't actually use other sizes. So what we said in the draft is: okay...
N: ...if we use a length of one, then we put the compact time in there. But, as you probably saw, it's a delta time, so a time offset and not an absolute time anymore. We think, though, that the recommended cache time is not so critical if you lose a little bit of precision there; the forwarder must then be able to convert the time offset to absolute time on transmitting and on receiving the packets.
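The length-based disambiguation just described can be sketched as below; the minifloat parameters remain the illustrative assumptions from the earlier codec sketch, not the draft's converged configuration.

```python
# Sketch of the disambiguation described for InterestLifetime: a 1-byte
# value is the compact delta time (seconds); longer values keep the classic
# CCNx milliseconds encoding. Minifloat parameters are assumptions.
def compact_decode(byte: int) -> float:
    e, m = byte >> 4, byte & 0x0F
    return (m / 16) * 2.0 ** -6 if e == 0 else (1 + m / 16) * 2.0 ** (e - 7)

def interest_lifetime_seconds(value: bytes) -> float:
    if len(value) == 1:
        return compact_decode(value[0])           # compact delta time
    return int.from_bytes(value, "big") / 1000.0  # classic CCNx milliseconds
```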
N: The advantage of this integration level is, obviously, that we don't need to allocate a new TLV number at IANA; the disadvantage is that this requires an update to the CCNx messages RFC, and that if there's a forwarder that doesn't know about this change, then of course the time is misinterpreted.
N: An alternative integration would be to actually define a new top-level TLV, so to allocate a number at IANA; then we would have something like an "interest lifetime compact" that would live next to the typical interest lifetime. The advantage is that if the forwarder doesn't know this TLV, it probably just ignores it and takes a default interest lifetime; the disadvantage is, yeah, that we allocate a number. And this brings us to the end.
N: Within the latest revision we made updates according to the formulas, and we also added terminology and IANA sections. As you probably saw, we need more feedback, also on the mailing list, especially for the protocol integration: how to integrate these compact times into CCNx. There are also the expiry and signature times, which are currently absolute times, and there were some ideas floating around a couple of years back on whether we can generate time offsets for these TLVs; the problem is that they are within the security envelope.
N: So then the question is: if there's interest in the group in this particular work, would this be ready for RG adoption?
A: All right, thanks for progressing this and for bringing it back. We think this is the right type of document for a research group activity: it's a useful feature, especially in these high-delay environments, and it's also raising interesting questions on protocol extensibility. So, from a chairs' perspective, I think we would propose adopting this.
A: Okay. Because, of course, as a next step we then need to make a decision in the group on how we want to go about integrating this, especially. So let's discuss this on the list then. All right, thanks again.

N: Yep, thanks.

A: All right.
A: So we are moving on with the updates on Ping and Traceroute. Spyros can't make it today, so Dave is going to present these.
B: Hi. Spyros couldn't make it; he did the slides, so I'm just going to quickly go through things on Ping and Traceroute, just to remind people what these are.
B: These are instrumentation and management tools for NDN- and CCNx-style ICN protocols, and they're analogous to the similar capabilities we have in ping and traceroute in the IP world; although, because the architecture of the underlying protocols is different, the capabilities of these protocols are correspondingly somewhat different, in terms of how they support multipath and how they can support the existence of caching in intermediate nodes.
B: Ping gives you the reachability of names, both from producers and from on-path caches, and we have packet formats for both of our popular ICN underlying protocols. Ditto for traceroute: this is a multi-path capable traceroute, just as we've seen in the IP world with tunnel-trace types of things, but here it's built in on day one, and similarly we have protocol encodings for both of our popular ICN protocols.
B: The current status is that these drafts have been around for quite a while, and they've been implemented in a number of experimental settings. We completed last call in January; we didn't get a lot of feedback, but we got a very good set of comments from Junxiao Shi.
B: Those were on the drafts, and both have since been updated by the authors. We had one issue that came up in last call, which had to do with how to integrate path steering, which is used in the multi-path case for both ping and traceroute, into the encoding methodology; we decided, based on some discussion, that it belongs in the base protocol, as opposed to some intermediate link mapping protocol, because although it's not an end-to-end capability, it is modified hop by hop.
B: The TLV that carries path steering now sits after the signature TLV, to make sure that it's excluded from the security envelope, since it is modified hop by hop; and we made use of the new capability in NDN to have typed name components, which NDN didn't have until fairly recently. So we use typed name components, both in the CCNx and the NDN variants of the protocols, to indicate that ping and traceroute require special processing by intermediate nodes.
B: So the next step is basically that the authors think the drafts are ready for IRSG review, and Dirk is going to have to be the document shepherd, since your other co-chair is a co-author of the drafts. I'm done, and I'll take questions.
A: Yeah, I will take care of it. If there are no questions, let's move on with path steering... oh.
B: One quick thing, just for Colin; I'll confirm this with mail to you, but these two documents are essentially a pair, so the same IRSG person will probably want to review both, since they're just two pieces of the pair.
A: Yes. So the next one is a refresher, also on work that has been done earlier, and that's path steering: an interesting capability that we can introduce to ICN, to make path selection available to consumers. We think it's quite relevant for the multipath discussions that keep coming up, and for the deficiencies that other protocols have in this direction; that's why we thought it's a good idea to look at this again and make people aware. So, yeah, Dave, go ahead.
B: Is everybody else seeing the messed-up conversion of the slide here? Yes?
B: Well, we do have a capability of encrypting the path labels, so maybe that's just the encrypted version; it's supposed to say "path steering refresher". All right. This has been around for a while, and we haven't had a lot of strong motivation to move it forward quickly until recently. The reason the motivation has gone up, of course, is that we're progressing ping and traceroute, both of which need this in order to be maximally useful as instrumentation tools in a multi-path forwarding environment: you want to make sure that, when you're measuring, for example, RTTs in a multi-path environment, you get individual RTTs for each of the sub-paths that might be traversed, and similarly for traceroute.
B: You want to be able to explore multiple paths to destinations and be able to report on them. So I'll quickly go through this; a lot of it is material you may have seen before, but we haven't really talked about it in a while, so it's probably worth taking a little bit of time to go through it again.
So
we
need
the
ability
for
troubleshooting
and
performance
tools
to
get
this
path,
visibility
in
order
to
find
problems
and
do
simple
measurements,
and
we.
B: We saw the same thing in the IP world when we tried to do things like MPLS and tunnel-trace, where there are multiple underlying paths; a lot of work was done, sort of late in the evolution of the IP protocol suite, to add these capabilities, and as a result it was quite substantially messy. So we're hoping that by incorporating these capabilities early in the evolution of the ICN architectures, we'll be able to do a much cleaner job.
B: We have two congestion control algorithms that have been published over the last couple of years: one called MIRCC, a multipath rate-based congestion control algorithm, and SMeK, a sub-path window-based multipath congestion control algorithm, both of which would be able to properly exploit an explicit path steering capability.
B: The design of this actually goes back to a paper published around 2018 on path steering, and also path switching, which does optimized forwarding paths using path steering. So the first design question is: how do you label the paths? Over the last few years we've looked at a number of possibilities.
B: We looked at label stacks, similar to MPLS label stacking, which has some nice history behind it; they're known to work reasonably well, but they require being able to vary the size of the data structure as you traverse the network, with the push and pop operations. We chose to use fixed-size labels, simply because we expect path lengths to be not unreasonably long and the processing of these to be really, really fast.
B: The second issue is: how do you discover paths, and how do you steer packets onto the paths that are discovered? One of the nice properties that makes this way easier than in the IP world is that we have symmetric forwarding and symmetric routing, so returning packets travel back over the same path that the interest was forwarded over. So the interest contains a path label marked as "discovery".
B: It's forwarded via the longest name prefix match in the FIB, and the content in a Data message carries the path label that's been computed on the way back towards the source of the packet. A subsequent interest can then take this path label, obtained from an earlier returned Data packet, without the discovery mark, and it's forwarded via the FIB with this explicit next-hop selection; so we don't bypass the FIB lookup here.
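As a toy illustration of the fixed-size label scheme (not the draft's on-wire TLV encoding): each hop contributes a 12-bit label, and the labels accumulate into a single path label that is carried back in the Data and replayed in later Interests.

```python
LABEL_BITS = 12  # per-hop label width, as described in the talk

def append_hop(path_label: int, hop_label: int, depth: int) -> int:
    """Pack one more per-hop label into an integer path label (discovery)."""
    assert 0 <= hop_label < (1 << LABEL_BITS)
    return path_label | (hop_label << (depth * LABEL_BITS))

def hop_at(path_label: int, depth: int) -> int:
    """Read back the label a hop uses for steering (non-discovery mode)."""
    return (path_label >> (depth * LABEL_BITS)) & ((1 << LABEL_BITS) - 1)
```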
B: I say that because the original paper also had an optimization of eliding the actual FIB lookup. So: we can reliably measure path RTTs; we can iteratively discover multiple network paths; congestion control can discover and distribute load across paths; and, although this hasn't been proven in anything implemented yet, if you believe you might be getting a content poisoning attack across one of these paths and you have multiple paths, you could bypass a poisoned cache.
B: On route distribution, some of the interesting issues are how you deal with route updates, where you have to invalidate paths that have been discovered. We have an Interest Return, a CCNx-style NACK, for carrying an "invalid path label" error, which can invalidate it; or you could silently forward the interest through any available next hop if you don't get a match. We can actually control this behavior, by either forcing the error or allowing you to fall back or redo discovery.
B: For the labels, there's a next-hop header that carries the path label, plus the sub-TLVs that go with it, in order to complete the encoding. NDN hasn't adopted this, but we have a proposal in the spec for how one would, and how we believe one should, integrate this into the NDN packet encoding: a new packet TLV called "path label", with essentially the same semantics as we've defined for CCNx. There's a picture of what it looks like. Now, there are some security considerations here; they're all in the document.
B: You can look at them there. Clearly, consumers could do probing and maliciously steer packets, but in order to be able to guess a correct next hop on a different path...
B: ...you have to send about two to the twelfth interests for this to work, so we have a number of mitigations in the spec in order to deal with that. And... whoops, I just unshared my slides; let me put them back up again.
B: A second possibility is cache pollution, by consumers and producers colluding to inject an off-path, bogus object. So, once you add this capability, cache entries have to be annotated with the corresponding path label; a cache entry is only used to satisfy an interest with a matching path label, and cache entries should not evict entries for the same object with no path label or a different path label, to mitigate this potential cache pollution attack.
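A dictionary-of-dictionaries sketch of that mitigation (a stand-in for a real content store, not an implementation from the draft):

```python
# Cache entries are annotated with the path label they arrived on; an
# Interest is only satisfied on an exact label match, and entries with a
# different (or no) label for the same name are never displaced by it.
cs = {}  # name -> {path_label_or_None: content}

def cs_insert(name, path_label, content):
    cs.setdefault(name, {})[path_label] = content

def cs_lookup(name, path_label):
    return cs.get(name, {}).get(path_label)  # label must match exactly
```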
B: And I'm done. So I think the issue at hand, which we'll talk about when we're done with the rest of the agenda, is whether perhaps we should do an RG adoption call for this work at this point. Thanks very much.
J: Sorry, I forgot to respond. Okay, I'm going to say the words you should never say at an IETF meeting: I have a question, but I haven't read the draft recently. Could you put the picture of the encoding up again? Because I didn't quite digest that. Okay, so it's a 12-bit next-hop label, and the idea is that it's completely up to the next hop what to put in there; it can be random, correct?

B: Correct.
B: The intent is that, since you want to be able to use this for diagnostic purposes as much as possible, every hop would be identified.
B: Oh, and please read the draft; we're interested in your comments.
A: Yeah, so we think this is quite a powerful tool that gives consumers many ways to influence the way they want to work with the network, and it's designed in a way that uses soft state and so on. It's something that other protocols cannot really do, and that's why I think it's also a really nice illustration of how ICN could work, or could be leveraged.
A: Okay, so I'm going to the next one. One of the reasons we put path steering on the agenda was actually that, when Dave and I were discussing how to redesign reflexive forwarding, path steering was one of the candidates; but in the end we decided not to use it.
A: So this is about reflexive forwarding for CCNx and NDN. Let me just talk a bit about the motivation. There are many demonstrated scenarios where, you know, ICN interest/data is just fine, very useful:
A: ...like, for example, data science environments, but also IoT and multimedia streaming and so on. But there are also other scenarios where it's not quite sufficient; think about how we would do, for example, web over ICN, so something like RESTful communication.
A: Often in these scenarios you need something like the ability to push data to somewhere, or you want to have a sequence of interactions with RESTful semantics, and so you need to establish some state and pass some parameters with every request and so on. These are, of course, relevant use cases, and in the past, research has tried to somehow realize them, sometimes with...
A: ...maybe, if I can say so, somewhat hacky approaches. The goal for our work here was to enable these scenarios in a way that doesn't completely contradict all the ICN paradigms that we enjoy so much: having no source addresses, flow balance, and so on. We think that this scheme could be a foundation for these scenarios, and possibly others as well, such as RESTful ICN.
A: I think that's something that would be needed in anything that has to do with the web in the future, for example. So what's the problem for these kinds of interactions? Take the web, for example: when you do RESTful communication, you set up a connection to some server.
A: You often transfer many input parameters: authentication, authorization, some kinds of tokens and so on, and quite often you do this with every request, so something like cookies and so on. If you look at what web requests look like today, the header fields, and sometimes also the size of the body, are really quite large, so you wouldn't put this into an interest if you can avoid it; that wouldn't really make sense. Another example is remote method invocation for distributed computing.
A: That's another use case where ICN gets more relevant. You have to think about how you would include your authentication information, or how you would include potentially large data sets that are needed for some kind of computation on the server side, and so on. The way that you would typically do this is to enable some multi-way handshakes; for example, for RMI you would maybe want to fetch the arguments...
A: ...somehow, if you want to do it in an ICN-compatible way, and maybe then perform some authorization. There are other scenarios where you could utilize multi-way handshakes: when you do IoT and you have phone-home scenarios, where you sometimes have something to tell your home base, but you don't want the, say, cloud-based server to poll you all the time; instead, you want to notify the server and then have the server fetch the data from you.
A: There you would utilize multi-way handshakes, as you would for any type of peer state synchronization, where you need at least three handshakes to do this reliably. So the question is: how would we deal with all these scenarios in ICN? Okay, we have seen some proposals where people just stuff additional parameters into interest messages.
A: Message sizes are one issue. Also, if you think about the RMI scenario, so client-server communication: if you just push a lot of data to the server, and the server needs to analyze that data to decide whether this is a valid request or not, well, this opens the door for well-known computational overload attacks, and you wouldn't want that. You want to give the server more control over what it wants to accept and what not.
A
So the issue with that is that suddenly you would require consumers to reveal something like a source address or a source name, which, well, normally we wouldn't have to do in ICN, and I think that's quite a desirable feature for many reasons. You could say anonymity, maybe, but also consumer mobility becomes more difficult. Right, when we don't have source addresses,
A
It's
very
easy
to
move
the
consumer,
but
like
having
a
routable
address
that
our
name,
that
the
server
needs
to
know
makes
this
way
harder,
and
so
it
also
would
probably
result
in
quite
complicated
state
machines.
If
you
have
to
deal
with
like
multiple
exchanges
in
parallel
and
so
on,
and
so
which
reflects
the
forwarding.
A
We try to overcome these issues and basically make it possible for a producer or a server to ask for additional data that it needs to perform some transaction, perhaps, but we don't want to do it in a way where suddenly the server has to know some kind of routable prefix or, like, stable name for the consumer. And so what we want to do is basically allow the server to send what we call a reflexive Interest back to the consumer, leveraging the state in the forwarding system that an initial Interest created.
A
So you send an Interest to a server; somehow this allows the server to implicitly get back to you, but it doesn't need to have a routable and stable source address for the consumer. And also, we want to couple the state that is needed for these, say, interim Interest/Data exchanges in the opposite direction to the overall state of the general Interest/Data exchange. And yeah, maybe it's better to explain it with a picture.
A
This
is
how
you
could
ex
explain
the
general
operation,
so
you,
a
consumer,
would
send
an
initial
interest,
and
so
this
has
the
like
a
usual
the
usual
icn
name,
to
identify
the
say,
named
object
on
a
server
but
has
additional
information,
so
a
reflexive
name
prefix
that
we
then
use
to
get
back
to
the
consumer.
A
So,
let's
just
assume
that
this
interest
arrives
at
the
producer,
and
so
it
enables
the
producer
to
maybe
check
some
initial
parameters
and
then
decide
whether
it
wants
to
continue
with
this
interaction
and
fetch
additional
data,
and
this
could
be
several
interactions,
not
only
one
and
then
make
maybe
step
boys
decisions,
how
to
move
on
what
else
is
needed
and
so
on.
So
rnp
is
this
reflexive
name
prefix
that
we
want
to
communicate?
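To make the exchange concrete, here is a minimal Python sketch of the message flow as described; the dict-based messages, the field names (name, rnp), and the helper functions are illustrative assumptions, not the draft's normative encoding.

```python
# Minimal sketch of a reflexive exchange; structures are illustrative.
import secrets

def consumer_initial_interest():
    # The consumer picks a fresh reflexive name prefix (RNP) per interaction.
    rnp = "/rnp/" + secrets.token_hex(16)       # 128 bits of randomness
    i1 = {"type": "Interest",
          "name": "/example/service/op",        # regular routable name
          "rnp": rnp}                           # reflexive prefix carried in I1
    return i1, rnp

def producer_handle(i1):
    # The producer checks initial parameters, then fetches additional
    # input by sending a reflexive Interest under the consumer's RNP.
    return {"type": "Interest", "name": i1["rnp"] + "/params/auth-token"}

def consumer_serve_reflexive(i2, rnp):
    # The consumer answers reflexive Interests under its own RNP only.
    assert i2["name"].startswith(rnp)
    return {"type": "Data", "name": i2["name"], "payload": b"<token>"}

i1, rnp = consumer_initial_interest()
i2 = producer_handle(i1)
d2 = consumer_serve_reflexive(i2, rnp)
print(d2["name"])   # e.g. /rnp/<random>/params/auth-token
```

The producer can repeat the reflexive step several times before finally answering the initial Interest with the outer Data object.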
K
Such a request with a reflexive name prefix looks to me like a very, very nice DoS attack vector, isn't it? So if you have a botnet, from many places you send this RNP x1, and x1 is actually your victim, and then your, your...
A
That's something that we have to avoid, of course. So maybe let me continue with the explanation, and then I think it becomes clear. So in the previous version, which we talked about some meetings back, the system worked in a way where we actually expected forwarders to install, like, a FIB entry in a, say, separate database.
A
That would then later allow them to forward these reflexive Interests back to the consumer. And this kind of works; you can do it this way. The, say, forwarding information here is still bound to this, say, outer Interest/Data exchange, so when, for example, the D1 data object gets sent back, the forwarders would also remove this reflexive FIB information.
A
But of course it has the disadvantage that you have to maintain this extra data structure and manipulate the forwarding information base and so on. So we thought about, okay, how could we maybe make this a bit more elegant, maybe easier to implement as well? And the current version was actually inspired by the PIT.
A
So in high-speed forwarders you have additional interesting challenges: you often have sharded PITs, because you have multi-core forwarding, and you want to give the forwarders an efficient way to map an incoming data object to the correct PIT instance in a sharded system. And so we thought, okay, PIT tokens, which have been used for that, are probably a useful feature anyway.
A
So
for
high-speed
forwarding
it's
it's
needed
for
these
multi-core
sharded
pit
systems
and
let's
leverage
that,
and
so
what
we
have
done
here
is
so
we
defined
like
two
two
tokens,
so
one
in
the
forward
direction
that
we
say
use
in
the
initial
interest.
We
call
forward
direction,
pit
token
fpt
and
then
another
one
for
the
reverse
direction.
That
is
then
yeah
kind
of
leveraging
the
this
pit
state
that
gets
established
in
the
in
the
forwarding
direction.
First.
So
maybe
let's
look
at
the
picture
as
well.
A
So here we have the consumer sending the I1 Interest, so with a regular prefix and also with a reflexive name prefix. That's a prefix that the consumer chooses, with, like, sufficient uniqueness. And so when a forwarder gets an Interest with a reflexive name prefix,
A
it's supposed to create this forward PIT token and install PIT state for that. So we have our regular PIT entry, maybe with an additional field.
A
I
mean
the
different
ways
to
really
implement
this,
and
so
this
contains
the
information
about
the
reflective
name
prefix
and
then,
when
we
get
the
i2
in
interest
later,
we
can
then
use
the
the
same
token
id
to
locate
the
the
pit
entry
and
then
make
a
decision
where
to
like
forward
the
interest
to
so
that
means
the
the
producer
is
receiving
these
interests
and
is
then
kind
of
mirroring
this
forward
pit
token
into
this
reverse
pit
token
field
and
then
sends
the
reflexive
interest
back
to
the
original
consumer,
and
then
the
the
forwarders
would
be
able
to
just
use
this
pit,
look
up
and
and
make
the
decision
where
to
forward
the
interest
to
thomas.
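A toy single-forwarder sketch of the token handling just described, assuming dict-based messages; the fpt/rpt field names and data structures are illustrative, not NFD or CCNx forwarder code.

```python
# Sketch of forwarder-side PIT-token handling for reflexive Interests.
import secrets

PIT = {}  # fpt -> {"rnp": ..., "in_face": ...}

def on_initial_interest(interest, in_face):
    # First forwarder on the path: on seeing an RNP, mint a
    # forward-direction PIT token (FPT) and remember the incoming face.
    fpt = secrets.token_hex(8)
    PIT[fpt] = {"rnp": interest["rnp"], "in_face": in_face}
    interest["fpt"] = fpt          # hop-by-hop token toward the producer
    return interest

def on_reflexive_interest(interest):
    # The producer mirrored the FPT into the reverse-direction token
    # (RPT); a plain PIT lookup yields the face back toward the consumer.
    entry = PIT.get(interest.get("rpt"))
    if entry is None or not interest["name"].startswith(entry["rnp"]):
        return None                # no matching state: drop
    return entry["in_face"]

i1 = on_initial_interest({"name": "/svc/op", "rnp": "/rnp/abc"}, in_face=3)
i2 = {"name": "/rnp/abc/params", "rpt": i1["fpt"]}
print(on_reflexive_interest(i2))   # -> 3
```

Note that the reflexive state lives and dies with the ordinary PIT entry, which is what couples its lifetime to the outer Interest/Data exchange.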
K
This looks to me as if you assume that the forwarder on your reflexive way back, so the one that sees Interest two, has actually seen Interest one.

A
Absolutely.

K
But that isn't obvious to me, because if you forward Interest one toward the producer using the prefix x1, then you're using the FIBs in between, and the FIBs are forward-directed; they need not be the same. I mean, they would have to be the mirror of the FIBs on the way to the producer.
A
Yeah, so this absolutely relies on symmetric forwarding, and if you don't have it, then this wouldn't work.
A
Right. So previously we actually had to install, say, routing information in the forwarders, and here, I mean, there are different ways to implement it, but here basically the forwarders would see that there is this RPT field, and then this would enable them to look up the token in their PIT.
A
PIT tokens, right. So, of course, this would also require modified forwarder behavior, but we think these changes are more benign than, you know, manipulating the routing state and maintaining separate tables for that.
J
Right, I guess I'm just reacting to calling it a PIT entry, because it's really forwarding an Interest; it's not a pending... in some sense it is a pending Interest, but, I mean, you see this as the same data structure, is that right?
A
Well, so this I2 Interest here that goes back to the consumer, I mean, first of all, it's a regular Interest, except that the name that we are asking for, you know, is this reflexive name prefix that we received in the initial I1 Interest. So mechanically it's a regular Interest; it just has an additional field, the RPT field, and this enables the forwarder to determine, okay,
A
Where
do
I
have
to
send
this
to,
but
we
don't
need
to
yeah
disclose
any
say
globally,
routable
name
or
anything.
So
this
is
just
like.
The
rnp
is
just
a
label
essentially
that
that
the
consumer
generates.
A
So these PIT tokens are typically hop-by-hop extensions, or hop-by-hop features, in NDN or CCNx. So one question is, okay, how do these names actually look? I didn't really explain it well, but the idea is that you would communicate a prefix, and under this prefix the consumer could, like, provide a set of information, like a cookie, a username or something; these would be different
A
data objects, and the draft also explains a bit more about this; like, you could use manifests if you have, like, larger data sets and so on. But the prefix itself, well, it would be a new name component in CCNx and NDN, and it's essentially just a random 128-bit number.
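A minimal sketch of minting such a prefix and deriving per-item names under it; the RNP= label and the URI-style layout are assumptions for illustration, not the actual CCNx/NDN name component encoding.

```python
# Sketch: a reflexive name prefix as a random 128-bit name component.
import secrets

def new_rnp():
    return "RNP=" + secrets.token_hex(16)   # 128 bits of randomness

rnp = new_rnp()
cookie_name   = f"/{rnp}/cookie"      # small items as individual objects
username_name = f"/{rnp}/username"
manifest_name = f"/{rnp}/manifest"    # root object for larger data sets
print(cookie_name)
```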
A
And yeah, so again, you could use this as a prefix and then construct different types of names, or maybe just a manifest that then refers to additional objects and so on. So this could also be used for, like, several interactions, if you think about, say, something like a web interaction where you have multiple parameters, maybe, that are needed for the server to process your request.
A
And yeah, so, as we mentioned, there is of course new node behavior required; details are in the draft for consumers, producers and forwarders. We haven't really implemented it yet, but we think that the forwarder modifications, well, are actually not as invasive as we had with the previous version.
A
So
you
you
need
this
pit
token
generation
when
you
receive
this
interest
as
the
first
forwarder
on
the
pass
and
when
you,
when
you
see
an
interest
with
this
reflective
name
prefix,
and
so
we
think
this
is
maybe
a
good,
a
good
approach.
If
you
want
to
cater
for
both
like
high
performance,
forwarders
and
just
you
know,
standard
software-based
one.
So
it's
relatively
easy
tournament,
but
it
still,
wouldn't
you
know,
screw
up
performance
in
these
high-performance
scenarios.
A
Yeah,
I'm
not
sure
we
have
to
go
through
these
specifications
so,
but
what
we
provided
encodings
for
both
cc
and
x
and
ndn
so
is
slightly
different
because
they
use
different
mechanisms.
A
And
maybe
let's,
let's
talk
a
bit
about
what
this
could
enable.
So
we
talked
about
remote,
entertainment,
invocation,
restful
and
and
data
pull
from
from
sensors
and
for
rmi.
We
had
this
previous
work
that
we
called
rise:
remote
messenger
application
for
icn,
and
so
there
we
would
yeah.
Also
you
use
this
system
because
we
would
also
have
to
send
some
request
parameters
when
we
want
to
initiate
a
remote
method
invocation
and
just
quickly
the
way
that
it
would
be
used.
A
There
would
be
that
you
send
your
initial
interest
with
the
reactive
name
prefix
and
then
the
the
server
kind
of
fetches,
the
input
arguments
say
one
by
one
and
so
in
this
system.
Then
we
would
have
an
additional
interaction.
You
would
kind
of
return
the
data
object
here
with
a
handle
that
allows
you
to
fetch
the
computation
reset
result
later
so
because
well
this
we
should
support
long,
lasting
computations
and
so
on,
and
this
is
a
typical
example
how
we
think
this
would
be
used.
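A rough sketch of that RMI-style interaction, again with toy message dicts; the argument naming and the result handle are illustrative assumptions, not the RICE wire format.

```python
# Sketch: RMI over reflexive forwarding with a deferred result handle.
import secrets

def rmi_call(method, args, rnp):
    # I1 carries the method name plus the RNP; the server then fetches
    # each input argument with a reflexive Interest under the RNP.
    fetched = [f"{rnp}/arg/{i}" for i in range(len(args))]
    # Instead of the result, the server returns a handle under which the
    # outcome of a possibly long-running computation can be fetched later.
    handle = f"/server/results/{secrets.token_hex(8)}"
    return fetched, handle

rnp = "/rnp/" + secrets.token_hex(16)
fetched, handle = rmi_call("multiply", [6, 7], rnp)
print(fetched)   # names the server pulled reflexively
print(handle)    # fetch the computation result under this name later
```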
A
So
often
you
have
additional
things
that
you
need
to
communicate.
So
this
fictitious
and
forwarding
scheme
could
just
be
one
element
in
say
a
more
evolved
communication
pattern.
A
And
just
quickly
for
this
iot
phoning
home
scenario,
so
assume
you
have
some
asynchronously
generated
data
and
you
want
to
get
it
somewhere,
so
you
could
say:
use
the
i1
interest
to
kind
of
notify
your
data
sync
or
database
or
whatever,
and
then
this
would
trigger
this
reflexive
interest
and
then
fetch.
The
actual
data
item.
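A device-side sketch of that phone-home pattern, with illustrative names; the point is that the sink reacts to a small notify Interest instead of polling, and never learns a stable device name.

```python
# Sketch: IoT phone-home via a notify Interest plus reflexive pull.
import secrets

def phone_home(sink_prefix, reading):
    # The device mints a fresh RNP, stages the data under it, and sends
    # a small notify Interest (I1) carrying the RNP to the sink.
    rnp = "/rnp/" + secrets.token_hex(16)
    outbox = {f"{rnp}/reading": reading}
    notify = {"name": f"{sink_prefix}/notify", "rnp": rnp}
    return notify, outbox

def sink_pull(notify, outbox):
    # The sink's reflexive Interest fetches the staged item.
    return outbox[f"{notify['rnp']}/reading"]

notify, outbox = phone_home("/cloud/sink", b"temp=21.5")
print(sink_pull(notify, outbox))
```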
A
Okay-
let's
maybe
jump
over
this-
you
can
so
this.
So
security
was
kind
of
one
of
the
concerns
here.
So
there's
some
extended
discussion
in
the
draft
and
what
I
think
we
want
to
convey
is
that,
while
we
think
this
could
be
say,
a
key
element
for
making
icn
fit
for
future
web
kind
of
systems.
A
What
is
also
related
here
is
key
exchange.
When
you
want
to
have
something
like
encrypted
communication,
name,
privacy
and
and
these
things
you
also
have
to
convey
parameters,
and
you
also
often
have
to
convey
something
like
a
cookie
that
kind
of
maybe
links
to
your
your
your
session
and
so
on.
So
that's
pop.
A
That's a scheme that we think would be useful in, say, many scenarios: you would maybe use this forwarding scheme to establish some state, and then later, when you have something like a session continuation, you could, like, use a cookie in the, say, what we call I1 Interest, to just, you know, refer to that previous state. And different protocols need this, or different interaction styles need this: web, key exchange, or, say, secure communication and so on.
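A small sketch of that session-continuation idea, assuming an illustrative cookie component in the name; this pattern is not specified in the draft and is shown only to make the flow concrete.

```python
# Sketch: continuing a session via a cookie component in a later I1.
import secrets

sessions = {}  # cookie -> state from an earlier reflexive exchange

def establish_session():
    cookie = secrets.token_hex(8)
    sessions[cookie] = {"keys": "<negotiated material>"}
    return cookie

def continued_interest(cookie, resource):
    # A later I1 carries the cookie so the producer can link it to
    # prior state without repeating the multi-way handshake.
    return {"name": f"/example/{resource}/cookie={cookie}"}

cookie = establish_session()
i1 = continued_interest(cookie, "inbox")
print(i1["name"], cookie in sessions)
```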
A
So that's something that could be, like, a general feature that we want to, yeah, specify maybe as a next step. Okay, let me wrap this up. We have time for one question: yeah, Christian.
M
Hello. You mentioned the problem of client or consumer mobility in the context of the previous work. Do you have any plans for addressing this here? Because, as I understand it, this now also doesn't work with a mobile client, but maybe the client could kind of, in later Interests, send information that would allow the producer to use the same prefix, different prefixes.
A
Yeah, so you're right. I mean, say, if we have a moving consumer within this whole interaction, that would also be a problem, of course. But what you could say here is that between reflexive, say, forwarding interactions, yeah, you could still have the usual mobility, because you wouldn't disclose any stable name, and the reflexive prefix would be generated by the consumer for every interaction, and so, yeah.
A
So there it's not as bad as using these stable or globally routable names.
A
Thanks for the question. Okay, yeah, I would be really interested to get more feedback; please give it to us on the mailing list.
B
Okay, got my audio on. So just a couple of things to wrap up; we're just about out of time, but I just want to sort of give people a feel for what's happening in the next period before we all get together again. So we made some progress on FLIC a few months ago, and it stalled again. We really would like... a lot of things are dependent on having a manifest capability in the architecture.
B
I guess we're just sort of asking, hey: is there anybody out there who can potentially help us here to try and unstick FLIC and get it to RG last call? There's still technical work to be done; it isn't just, you know, dotting i's and crossing t's here. So that's the number one issue. Question by Ken.
J
Yeah, it's not a question; I'm just saying I'm in the process of going through it again. It's much improved, but I want to be able to give substantive comments. I will do that; I will bump that up in priority and try to get something to the list soon.
B
That would be great, thank you. We're going to take ping and traceroute forward: the IRSG reviews since last call successfully closed, and we have an updated spec. Dirk is going to do an RG adoption call on path steering; please give your feedback on the mailing list as to whether this is appropriate. And then, two quick things. Our sort of new work coming in is at a level lower than we would like.
B
Our
engagement
has
obviously
been
hurt
by
all
the
covet
situation
and
a
variety
of
things,
but
we
are
not
running
out
of
capacity
to
deal
with
interesting
new
research
work
that
people
would
like
to
bring
to
the
group
for
discussion
and
potentially
work
among
the
participants
in
icnrg.
So
please
think
about
bringing
your
work
in.
B
Computing
fits
in
icn
as
well
as
with
that
and
a
general
question
is
we
will
plan
the
meet
at
itf
114
in
philadelphia
in
july,
but
if
we
have
enough
stuff
going
on
we're
interested
in
your
views
as
to
whether
we
should
have
another
interim
meeting
between
now
and
late
july,
so
that's
it.
I'm
done,
I
think,
dirk,
and
I
both
thank
everybody
for
for
your
time.
B
We
hope
to
have
this
face-to-face
in
philly
and
please
remember.
We
have
a
annual
icn
conference
coming
up
in
september.
Paper.
Registration
deadline
is
a
couple
months
away,
so
you
have
still
time
to
work
on
your
papers.
B
It's
our
premier,
research
venue
for
work
in
the
area,
so
keeping
that
vibrant
and
interesting
is,
of
course
interesting
to
this
whole
community.
So
thanks.
A
Thanks, everybody, thank you, and yeah, see you on the mailing list, hopefully. Check out the discussion on low-latency video distribution; I think it's quite interesting what's going on there at the moment.