From YouTube: IETF106-ICNRG-20191118-1000
Description
ICNRG meeting session at IETF106
2019/11/18 1000
https://datatracker.ietf.org/meeting/106/proceedings/
These are my co-chairs, Börje Ohlman and Dave Oran, and I'm their co-chair. Before we start: we have an updated Note Well, and it's important to be aware of that. First of all, this part hasn't really changed — you're still expected to let us know if you introduce IPR in the presentations or discussions.
Second, the IRTF is also following the IETF's policies for privacy and the code of conduct, so check out these two RFCs here, and talk to the Ombudsteam if you have any concerns or questions. And finally, we'd like to remind you that this is the Internet Research Task Force — we are doing research here.
We are generally not doing standards, although sometimes we use the same methodology: we do drafts and, at some point, RFCs. Quite often the goal of the work we do here is just to enable additional experimentation, to enable collaboration on certain topics — but it's really not about producing standards.
Check out the IRTF primer if you have questions on that. Okay, so ICNRG, like all the other groups, has a mailing list, a web page, a datatracker page, and so on. For this meeting, a volunteer has kindly agreed to take notes — thank you very much, it's not the first time. The materials are uploaded, and this is the agenda for today. As you can see, it's pretty packed. We didn't have a full-day interim yesterday, because we just had one in Macau.
Okay, then let's get started. We had an interim meeting in Macau in September, after the ACM ICN conference there. The conference, and also our ICNRG meeting, were kindly hosted by the University of Macau — thanks a lot for that. We had a pretty interesting agenda. We often use these post-conference meetings to do a deep dive on certain interesting topics — for example, paper presentations that occurred at the conference, or some in-depth technical discussion.
The background is that in the old days, when we were doing things like video streaming, the concern was mostly linear video and the like. Today there are much richer ways to interact with multimedia — for example, in multimedia presentation environments or theater environments.
Where Jeff works, you are constantly mixing different sources and so on, and so the idea that Jeff and his team had was to provide good networking support and good APIs for that. In the end they actually integrated it into a commercial tool, TouchDesigner, by a Canadian company called Derivative, and gave this to designers and researchers in that space for experimentation. That was a really cool workshop, and Jeff gave us a summary — check out his slides from the interim if you're interested in that.
Cenk gave an update on ICN LoWPAN — I'm not going to talk about that now, because Thomas is talking about it later — and he also talked about Quality of Service for ICN in the IoT. There's a draft in ICNRG about these ideas, and that team also had papers and demos at the conference.
The general approach is that you can be a bit more expressive about classifying flows and so on, and especially in the constrained IoT environment it makes a lot of sense to look at resource control quite carefully. This work addressed a case where they wanted to support really critical IoT communication — life-saving alarm signals, for example — and they looked at what you can do, in an isolated fashion, on one node: how do you deal with the different resources?
For example, we had a presentation from the Hong Kong Applied Science and Technology Research Institute, ASTRI. They are developing an NDN-based system for smart water meter collection in Hong Kong, and they talked about their prototype implementation and their ideas for it. And Peter Kietzmann from the RIOT team gave us an update about the LoRa support in RIOT.
There is proprietary code by Semtech — basically the inventors of the LoRa PHY protocol — and that's integrated into RIOT, but RIOT is also working on a more elegant integration of LoRa into their general networking stack. So basically, LoRa is supported quite well in RIOT.
That's good news for people who want to do experiments — including ICN experiments — in that space. After these more presentation-oriented discussions, we had an in-depth discussion about some design opportunities and design alternatives for doing ICN in, or over, LoRa. I'm not going to repeat that now; we will probably continue this and will be able to tell you more at the Vancouver meeting. If you're interested in participating, we just use the ICNRG mailing list, so please contact me or use the list.
We also had an interesting update on thoughts about push-based communication. If you have been here before, maybe you remember an earlier presentation where we talked about the general idea and the motivation for doing push-only communication; this time there was some more information about protocol design ideas.
So this talk is about some characteristics of flow balance that are desirable versus undesirable, and how we might make some small changes to algorithms and protocols to improve the performance of a flow-balance-based flow and congestion control scheme. This draft has been around for a few months; I haven't gotten very much in the way of comments on it.
It's a fixed-size field, so the maximum theoretical L3 MTU is 64 kilobytes — the same as UDP or TCP in the IP world. If the problem you're trying to solve is how you fit this into the L2 MTU, you need fragmentation protocols for the case where the L3 NDN or CCNx message is bigger than an L2 MTU.
So what's the problem here? The problem is that small data objects — in terms of constructing your ICN-level data packet — are inconvenient for a number of applications, because the natural object size you want to deal with is larger than a link MTU. Things like video frames are larger than an MTU.
Various other things — say, a row of a database table, or any number of data structures — are also larger than an MTU. On the other hand, you have applications with very small data objects, such as voice-over-IP audio frames, which can be as small as 20 bytes, or sensor readings, which conceivably could be even smaller — 2, 3, 4 bytes. And if you don't know how big the L3 data packet coming back at you will be when you issue an interest, how do you do resource allocation in a reasonable way?
What have people been doing? If you look at the existing resource allocation algorithms that have been implemented and measured, they make very conservative resource allocation decisions and assume MTU-sized packets — at least L2-MTU-sized packets, and in some cases even L3-MTU-sized packets. Then there's some per-packet crypto overhead, in terms of how much you have to hash on every packet and what you include in the packet in the absence of manifests, which makes things bigger than you might like.
This is equivalent to the problem of sizing a data packet when putting it into a VPN tunnel: the native data might fit on the link, but then I add the extra overhead, I've spilled over, and I need fragmentation. So why don't we solve this whole flow balance problem with fragmentation? There are a bunch of different fragmentation schemes — I won't go into what they are. You can do end-to-end; you can do hop-by-hop.
You can do hop-by-hop with cut-through — any of these things — but the bottom line is that this doesn't actually solve the flow balance problem, because you still have to allocate buffer and link bandwidth at each intermediate hop, because of the stateful forwarding, and you have to set that aside for maximum-size objects. If you don't do that — and this is similar to the fragmentation problem in IP — you're going to get congestion collapse on overload, because you can't put any single thing back together again in a reasonable way.
So here are some design considerations that led to the scheme I'm proposing here. You want some means to allocate link bandwidth for data messages, with an upper bound larger than the path MTU and a lower bound lower than a single link MTU; you want to at least handle moderate-sized objects. I sort of took the CCNx view of the world, which is that it's unlikely you would ever want an L3 MTU greater than 64 kilobytes — as opposed to really big ones — and to find the right trade-off there.
That means it's really useful to know, when an interest message arrives, how much data you expect to come back based on the arrival of that interest message. So the solution is actually super simple: you just add a TLV to the interest message saying "this is how big I expect the data coming back for this interest to be", and you use that to calculate bandwidth allocation for the return hop, instead of just counting all interests equally. So that's super simple — only it isn't.
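The accounting idea above can be sketched in a few lines. This is a hypothetical illustration, not code from the draft: the class, the `expected_data_size` field name, and the capacity numbers are all illustrative. It contrasts using the consumer's declared size with the conservative fallback of assuming an MTU-sized data message per interest.

```python
# Hypothetical sketch of a forwarder's return-link accounting. When an
# interest carries an expected-data-size value, the forwarder reserves that
# many bytes; otherwise it falls back to the pessimistic MTU-sized guess.

L2_MTU = 1500  # bytes; the conservative per-interest assumption

class ReturnLinkAllocator:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes   # return-link budget for this interval
        self.reserved = 0                # bytes set aside for pending data

    def admit(self, interest):
        # Use the consumer's declared size when present, else assume an MTU.
        need = interest.get("expected_data_size", L2_MTU)
        if self.reserved + need > self.capacity:
            return False                 # would overcommit the return link
        self.reserved += need
        return True

    def data_returned(self, size):
        self.reserved = max(0, self.reserved - size)

alloc = ReturnLinkAllocator(capacity_bytes=6000)
# Four 20-byte VoIP-frame interests fit easily once their sizes are declared;
# counted pessimistically, the same four would have reserved 4 * 1500 bytes.
assert all(alloc.admit({"expected_data_size": 20}) for _ in range(4))
assert alloc.reserved == 80
```

The point of the sketch is only that byte-based accounting admits far more small-object interests per interval than per-interest MTU accounting does.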
So there are a number of problems here. One is: how do you know the size — what are you putting in the interest message? Well, in a lot of cases, for a lot of applications, this is actually pretty easy. For sensor or other Internet of Things applications, the data is instrument readings; these are fixed-size and known in advance by the application protocol, and you can just tell the L3 "this temperature reading is four bytes long". The same holds in media streaming.
Almost all the known vocoders have fixed-size frames, which are negotiated at the start of the session. So you know before any data is exchanged — if you're doing LPC or CELP, they're going to be somewhere between 10 and 20 bytes per sample. All right, but that doesn't cover everything. Sometimes you don't know, and the consumer has to guess. So we need to deal with consumers that guess — and if you allow consumers to guess, you need to deal with both honest consumers and malicious consumers.
So the second problem: if the user is guessing, or something you thought you knew turned out to be wrong, the data could be a lot bigger than the estimate. If that's the case, two things can result. First, you can get unfair bandwidth allocation, where you've allocated resources for this much, but this much comes back.
In the worst case, you can actually amplify congestion, by sending back a lot more data than you've allocated the resources for. So you have a number of choices here — I won't spend much time on them. You can forward the data anyway, which is safe if you're not congested, but unfair and unstable when the link gets congested. You can forward the data when the link is uncongested, but suppress the interest and reject it when the link is congested. Or you could say, well, people who mis-estimate are nefarious characters.
So if you ask for too much, you just drop the data, as a way of punishing the user for the mis-estimate. But if you're going to do that, you need some kind of feedback — and you'll notice that in the protocol proposal I'm making, there is an error code that can come back on an interest if you drop the interest, or the data coming back, due to the mis-estimate being too big.
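The consumer side of that feedback loop might look like the following sketch. This is a hypothetical illustration only: the error code name, the `express_interest` call, and the doubling policy are assumptions for the example, not part of the draft.

```python
# Hypothetical sketch of a consumer reacting to the "estimate too small"
# error feedback: it retries the interest with a larger size estimate
# instead of silently losing data. All names here are illustrative.

def fetch(name, network, first_guess, max_retries=4):
    estimate = first_guess
    for _ in range(max_retries):
        reply = network.express_interest(name, expected_data_size=estimate)
        if reply["type"] == "data":
            return reply["payload"]
        if reply["type"] == "error" and reply["code"] == "ESTIMATE_TOO_SMALL":
            estimate *= 2   # back off multiplicatively on the size guess
            continue
        raise RuntimeError(f"unrecoverable error for {name}: {reply}")
    raise RuntimeError(f"could not fetch {name} within retry budget")
```

Any concrete protocol would need to pick the retry policy carefully so that a guessing consumer converges quickly without inflating its reservations.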
So the next problem is the inverse problem.
The data is too small. You don't cause any congestion by overestimating the size of the data coming back, but resources get inefficiently allocated, because not all the set-aside bandwidth is actually used for the returning data. There are some things you could do here too. One is to just ignore the problem, because if you're doing stochastic resource allocation on the return link, somebody will get to use that bandwidth anyway.
You could account for the usage according to the larger data size that was stated, so you're penalizing people in the domain of their flow control for the mis-estimate. Or you could attempt to adjust the congestion control parameters around this — that's a bit too detailed for this talk.
Look at the draft for that. Then there's problem number four, the one that always rears its head in any of these architectures, like NDN and CCNx: interest aggregation. Multiple interests can be aggregated at the same point — and what if one of them says the data is 10 kilobytes and the other one says the data is 64 kilobytes? What do you do? How do you manage this? Again, the draft has a bunch of possible approaches; I won't spend much time going through the details here.
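One simple way to resolve conflicting estimates at an aggregation point is to keep the largest declared size for the PIT entry, so the eventual data satisfies every aggregated consumer's reservation. The sketch below is a hypothetical illustration of that single approach — the draft discusses several — and the PIT structure and return values are invented for the example.

```python
# Hypothetical sketch: PIT aggregation that tracks the largest expected data
# size declared by any aggregated interest for the same name.

pit = {}  # name -> {"faces": set of downstream faces, "expected_size": int}

def on_interest(name, face, expected_size):
    entry = pit.get(name)
    if entry is None:
        pit[name] = {"faces": {face}, "expected_size": expected_size}
        return "forward"            # first interest: forward upstream
    entry["faces"].add(face)
    if expected_size > entry["expected_size"]:
        entry["expected_size"] = expected_size
        return "forward_update"     # upstream reservation must grow
    return "aggregate"              # collapsed into the existing entry

assert on_interest("/video/seg1", 1, 10_000) == "forward"
assert on_interest("/video/seg1", 2, 64_000) == "forward_update"
assert pit["/video/seg1"]["expected_size"] == 64_000
```

Taking the maximum avoids under-reservation, at the cost of possibly forwarding an updated interest upstream when a larger estimate arrives.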
Just read the draft — there are some reasonable approaches to dealing with interest aggregation. By the way, any time you try to do something clever in either NDN or CCNx, you're going to run up against "what am I going to do about interest aggregation?". We'll see this in the quality-of-service discussion later as well. And then, of course, you have malicious actors — people who are intentionally overestimating.
They overestimate data sizes with the goal of preventing other users from using the bandwidth; or consumers intentionally underestimate the data size, with the goal of having their interest processed while other aggregated interests are not — so one user's interests interfere with another user's interests, denying them access to the data. The simplest approach here, very much the one I recommend, is: this is effectively an interest flooding attack; deal with it.
Deal with it the way you would deal with any interest flooding attack. There are some more sophisticated things you could do, but they carry additional computational cost, in terms of tracking the state necessary to distinguish a malicious from an honest use of the scheme. So from a protocol proposal point of view, this is really simple: there's a proposed encoding — a single new TLV. You stick the expected data size in that TLV in the interest message, and it travels along with the interest message.
Any forwarder can look at it and use it to allocate bandwidth for the returning data based on a byte count — which is what most congestion control algorithms would like to do — rather than pessimally or stochastically, based on an assumed MTU size. And that's the end of my talk. Thank you.
Going once — how many people read the draft? Okay, that's probably all right. The goal I have here is to have you find this sufficiently interesting, or scary, one way or the other, that you go read the draft. Then you tell me how stupid or how wonderful the idea is, or what we should do with it going forward. Thanks.
Off topic — I'm going to talk about an update to a draft that has been around pretty much since the genesis, the creation, of NDN and CCNx: a method for doing manifests, which are the ability to describe collections of ICN data objects. The background is that this has been around for a very long time. The design was done — I don't even remember, probably sometime around 2014, around the time of CCNx 1.0 — and Christian Tschudin, for CCN-lite, came up with an early version of this.
That's how it got the acronym name FLIC. In terms of the protocols we're dealing with, manifests are useful in NDN in the sense that you don't have to use them — you could put everything you need in every single data packet — but they're critical in CCNx, because CCNx is heavily biased toward not having all the state you need to process a data message in each individual data message, assuming instead that you can get the common pieces of information through an aggregated data structure like a manifest.
For example, CCNx is heavily biased toward what are called nameless objects, which are fetched just by the implicit hash of the name together with the data. A manifest is needed in order to take large logical objects and segment them into data packets of various sizes, and it's also needed if you want to express collections of objects — the equivalent of filesystem directories or database tables. It's been implemented, it's been around for a long time, and it's been in use in real applications for a long time.
So where are we? The current state of affairs is that the draft expired in 2018. The original authors have pretty much gone on to greener pastures and don't have time to do much work on it. So I cajoled two of the existing authors into resurrecting the work, and we had a meeting in Montreal to figure out how to get it going again. We got a little bit of help from some people, but Marc Mosko and I basically said, hey.
Let's just get this going — it's a critical piece of technology that we need for both of these designs. After some work, we finally got a new version of the draft published a few weeks ago. However you say it, it's now alive again. Some things haven't changed: FLIC still uses this notion of a hash group, which is a group of object hashes that represent individual data packets.
It still has metadata in the manifest that tells you how to interpret the manifest, using a relatively simple automaton to traverse it. And it still has this nice property, which I think is underappreciated: you can encrypt the manifest using different keys from the ones used to encrypt the data. The reason this is useful is that you may want third parties and intermediaries to be able to interpret a manifest — to do various types of optimizations — without actually giving them the data encryption keys for the underlying data.
So what has changed? Quite a bit, actually, since we've learned a lot in the last year and a half since the work went idle. The first thing we did is add this concept of namespaces, so that you can really describe how the naming conventions for sub-manifests and data objects work. The prior version just sort of assumed everything was a nameless object.
You couldn't have a manifest pointing to a list of entries, each of which had a name. So we've defined three namespaces: nameless operation, which points to hash objects; a single prefix, which says that everything under the name of the manifest is a next-level hierarchical name component; and segmented prefix, where every name is unique. So you can do all three of those things.
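The three namespace modes can be illustrated with a small sketch of how a consumer might derive the name used to fetch a manifest entry. This is a hypothetical illustration of the idea, not the FLIC encoding: the mode strings and entry field names are invented for the example.

```python
# Hypothetical sketch of the three FLIC-style namespace modes, returning a
# (name, content_hash) pair to use when fetching a manifest entry.

def entry_name(namespace, manifest_name, entry):
    if namespace == "nameless":
        # Hash-only pointer: the entry is fetched purely by content hash.
        return (None, entry["hash"])
    if namespace == "single_prefix":
        # Every entry sits one hierarchical level below the manifest's name.
        return (manifest_name + "/" + entry["component"], entry["hash"])
    if namespace == "segmented_prefix":
        # Each entry carries its own full, unique name.
        return (entry["name"], entry["hash"])
    raise ValueError(f"unknown namespace mode: {namespace}")

assert entry_name("nameless", "/doc", {"hash": "h0"}) == (None, "h0")
assert entry_name("single_prefix", "/doc", {"component": "seg0", "hash": "h1"}) == ("/doc/seg0", "h1")
assert entry_name("segmented_prefix", "/doc", {"name": "/other/x", "hash": "h2"}) == ("/other/x", "h2")
```

The storage cost ordering discussed next falls straight out of this: nameless entries store only a hash, single-prefix entries add one shared prefix plus a component, and segmented-prefix entries store a full name each.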
Obviously the first is the most efficient, since it stores the least amount of data in the manifest; the next is intermediate, where you're sharing the prefix among the objects; and the last means that even if neither of those properties holds, you can still construct a manifest for a random collection of objects. Each hash group in the manifest can use its own namespace, so the application and manifest namespaces for the things you're pointing at can differ from each other. The second thing that's changed is the encryption scheme.
How can I put this — it sounds really different, but it's really not all that different. There have been some changes to the encryption scheme, and I think these are all pretty much for the better. Before, the encryption scheme left some things uncovered, and hence various pieces of metadata would leak. We've redone the syntax of the manifest such that no information leaks, since it's all encrypted under the key that encrypts the manifest — and it was done in such a way that you don't have to do data copies.
If anybody is using that, we'd like to know, because we're going to take something away from you: there's now only one encryption key for an entire manifest, or manifest tree. We specified in detail how you do pre-shared-key encryption and group-key encryption for the manifests — group-key encryption is probably the most useful for a data structure like this. In terms of processing the keys and the actual encrypt/decrypt, it turns out they devolve to the same underlying code you need to make it work. And we added extensibility mechanisms.
Both the encryption mechanism and the key location mechanism — for getting the keys to decrypt the manifests — are extensible, so we can add new key location capabilities and new encryption schemes. The next thing that's changed is that we've enhanced the metadata; the manifest metadata has been pretty much completely refactored.
If there's any manifest-level metadata you'd like to add — hints about video coding if it's for video objects, or hints about the time-series semantics of sensor data readings, something application-specific — you can put that in the manifest, rather than having the application go out and actually fetch data objects in order to discover those sorts of things. We also added metadata on the pointers in the hash groups.
So now you can have annotated pointers as well as plain pointers, and the annotated pointers allow metadata and extensions on every pointer. A good example, which reflects back on my previous talk, is that you can put size information on every pointer, so that just by parsing the manifest you know what to put in — for example — one of those expected-data-size fields when you go to fetch something. You can also use this nicely for seeking.
If it's a linear data object, like a large file, you can use this for seeking, because once you have the manifest you know the length of everything, and you know where to go to get a particular byte in a linear data structure.
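The seeking idea is simple enough to sketch: with a size annotation on every pointer, a consumer can map a byte offset to the packet that contains it without fetching anything first. This is a hypothetical illustration; the function and its arguments are invented for the example.

```python
# Hypothetical sketch: turn a byte offset in a linear object into the index
# of the data packet to fetch, using per-pointer size annotations from an
# already-retrieved manifest.

def locate(byte_offset, pointer_sizes):
    """Return (pointer_index, offset_within_packet) for a linear object."""
    start = 0
    for i, size in enumerate(pointer_sizes):
        if byte_offset < start + size:
            return i, byte_offset - start
        start += size
    raise IndexError("offset beyond end of object")

# A three-packet object of 4000, 4000, and 1000 bytes:
assert locate(0, [4000, 4000, 1000]) == (0, 0)
assert locate(4500, [4000, 4000, 1000]) == (1, 500)
assert locate(8999, [4000, 4000, 1000]) == (2, 999)
```

A real consumer would cache the cumulative offsets (or binary-search them) for large manifests, but the linear scan shows the principle.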
You can also do hints for traversal order — if you have a video application where you would like to fetch things in an order different from the order in which the encoder produced them — decoding hints, and all sorts of other things.
The only thing we've added to the actual spec is the size extension for annotated pointers, but it's extensible; you just register TLVs. There are also a number of miscellaneous changes. Locators — which are names that are topologically sensitive, so we can have multiple names for the same underlying data based on a manifest — can now be an array, as opposed to just a single locator.
So if, for example, you have a set of data that you want to name with a manifest, and you want the names to let you place producers in the infrastructure of multiple service providers — such that the actual names are routed topologically with the service provider, but you still maintain all the security and keying of the application — you can do that. There's a lot more detail in the draft.
The draft has expanded to probably twice its previous size, with a lot more explanatory material, and we now have a Python implementation — not exactly the whole draft, but pretty close. We've added enough to the implementation for you to start working with it. There are still a few things to be done; we're not quite ready to last-call this.
We intend to do some more work between now and the end of the year to update the two implementations and get the Python implementation up to date. We're missing an IANA considerations section, which is just mechanical stuff — we need to register things — and we don't really have a security considerations section, which is in fact the big problem; that still needs to get written. So, we're done. Thank you. — Cool, thank you. Questions? Questions?
So as chairs, we would really like to encourage people to read this draft. If you are working on any ICN implementation, please look at it very carefully; it would be great if you could even consider implementing it. Again, it's a key piece of technology. It can do much more than just enabling the use of collections — it's cut out to be a tool for supporting better cooperation between nodes and the network, as Dave pointed out. So it would be good to have more people looking at this. Okay.
Okay, this is an update on the NRS documents. I'm Jungha from ETRI. We have two documents on NRS: one is the design guidelines for NRS in ICN, and the other is the architectural considerations of ICN using NRS. The title of the first one has been changed from "requirements" to "design guidelines" — we hadn't discussed the requirements document for a long time, but at the last meeting in Montreal we updated it and changed its title to "design guidelines".
We asked for RG last call for both documents in Montreal, and we got some comments on the mailing list. I have copied Dave's comments for the two documents here. For the design guidelines, the technical comments may require a second last call; for the architecture considerations document, there are some technical and editorial comments, but those can be resolved without requiring a second last call — and the two documents are separable.
The first one, the design guidelines, focuses on the NRS itself as a service or a system in ICN: it provides an NRS overview, NRS functionalities, the design guidelines themselves, and security considerations. The other one, the architectural considerations, focuses on things related to the ICN architecture: it describes how the ICN architecture changes, and what implications are introduced within the ICN routing system, when an NRS is integrated into ICN.
So if you are looking for what an NRS is, you should see the first draft; the second is also related to ICN, but it focuses on the ICN architecture, not on the NRS itself. That is the difference between the two documents, and we added a cross-explanation of the two documents in the introductions, which was one of Dave's comments.
For the design guidelines document, we tried to address all of Dave's comments. I'm very sorry that I haven't yet sent out to the mailing list how we addressed them, but I will explain a little bit now, and I'll try to send that out as soon as possible during this IETF meeting. So please keep an eye on the mailing list and give us your comments as well. We mostly accepted and agreed with Dave's comments, except regarding the term "content discovery".
He suggested changing it to "information request" or "content request", but we changed it to "content request routing". The reason is that the part where "content discovery" appeared was explaining how ICN routing works, and it was actually quoted from the ICN Research Challenges RFC.
That RFC describes ICN routing in three steps: the first is name resolution, the second is discovery, and the third is delivery. We just copied those terms and put "content" in front of "discovery" and "delivery", but I think that created some confusion with other uses of the term "discovery". Our intention was content request routing, so we changed it — though there are still a few places where we used "content discovery".
Originally we tried to change them all, but we missed two instances, so we will fix those in the next revision. There was a similar comment from Marc: he said it would be good to clarify the term "discovery" in this document, and he also mentioned that there are two kinds of discovery — one is name discovery, and the other is content discovery.
But we consider both, because in this document we try to consider any type of NRS, to show the possible functionality of an NRS — and the discovery part is not the main issue of this document. So we deleted the term; also, "discovery" has not been defined even in the terminology document.
He gave us a lot of comments, which was good — it was a good discussion for ICN — but some of it was a little bit out of scope for this document. So we accepted some of his points, and I will also describe on the mailing list, as soon as possible, how we addressed them and what was out of scope.
For the architecture considerations document, there were only Dave's comments, and we have reflected them, but we couldn't quite complete all of them for this revision. What we are trying to do is complete them as soon as possible and submit again soon. For now I will mention only one of his comments, on NRS caches. We assumed that an NRS resolver has a cache, because an NRS cache could be helpful, for example, for a live streaming service, or for time-critical services.
You don't have to do the name resolution at the server every time. So for that case we assume NRS caches — but because of that, there are architectural considerations for caches. Whenever you use a cache there are always cache-related issues, but those are not really directly related to the NRS itself.
Stu Card: My question has to do with the earlier draft, the design guidelines. In the security considerations, section 6.1, accessibility, second paragraph: "the NRS may support access control for certain name records, so that only users and producers within the proper lists can access these records." That assumes the traditional access control list model of access control, and I think name resolution naturally maps better to the capability-based model of access control, since essentially these are pointers, references, etc.
H
Well,
we'll
consider
well
your
comments.
Thank
you.
D
F
D
B
A
While we're getting the slides up, let me preface the talk by saying that this is work based on a paper that Ilya Moiseenko and I published at ICN 2017, which, it turns out, has attracted more than just a little bit of interest in the community — so we decided to do more than just say "go read the paper".
We would take it a step forward and explain a bit about how we might want to do this as part of the CCNx and NDN protocol architectures, so people could actually use this to move forward on application development, rather than just seeing some research results. So think of this as a proposal to enhance the protocols using that work. For those of you who have read the ICN paper, you can go to sleep, because there's practically nothing here that isn't in the paper.
Great, all right. A quick outline: I'll give you a bit of introduction to the background of path steering; I'll talk about the design of this path steering scheme, and a bit about the packet encoding. And when you do things like this, you may have some additional security considerations over pure longest-name-prefix-match stateful forwarding. So here's the problem statement.
We have no mechanism for consumers to affect the selection of which path, among the feasible paths, gets used for any given interest/data exchange. The forwarders can spray the packets over the various paths, and when failures occur it's kind of hard for the consumers to actually figure out what's going on.
They may be getting performance glitches: if a packet goes over one path that hits a failure of some sort, while another happens to be selected for a path that doesn't, you have a very difficult time figuring out why your performance is what it is. So one of the motivations for giving consumers the ability to steer packets onto a path is to monitor and troubleshoot.
all these multipath network connectivity problems, which don't exist so much in IP, because IP basically only supports ECMP and not general multipath, and also doesn't support multiple destinations, since the addresses expressed are either unicast or anycast addresses, and multicast in IP is a whole different kettle of fish. So you'd like to be able to do that, and to have tools like the equivalents of IP's ping and traceroute in order to diagnose problems.
A
Secondly, in order to measure the performance of a path, you need to be able to send multiple probe packets down that path to figure out what its properties are from a performance standpoint; and if you can't control whether a given interest message gets allocated onto a particular path, you have no way to do any kind of fine-grained performance measurement of your paths.
A
My particular interest in getting into this work came from the work I did over a few years on multipath congestion control, because some of the more sophisticated multipath congestion control algorithms actually require you, as a consumer, to count the number of available paths, uniquely identify those paths, and allocate traffic proportionally to the capacity of those paths. So there are a number of questions.
A
The first question you have to ask is: how do you label the paths? From our point of view, a mental model you might want to have, which is a little dangerous, is something like source routes, or MPLS LSPs, or segment routing enumerations.
A
It's not exactly the same, so be cautious; but at a high level we're basically constructing a data structure that contains information about every hop on a particular path. In the paper we published, we examined a number of possible ways to encode this information, including Bloom filters and this very clever thing called Cantor pairing functions (go back to your CS days and look at them; they're very cool). They wound up only working for short paths for us, but it's a very cool data structure.
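For readers who want to see why Cantor pairing only works for short paths, here is a small illustrative sketch (my own, not from the paper or draft): the pairing is exactly invertible, but folding one label per hop makes the packed value grow roughly quadratically per hop, so the encoding quickly outgrows any fixed-width packet field.

```python
import math

def cantor_pair(k1: int, k2: int) -> int:
    """Bijective pairing of two non-negative integers."""
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def cantor_unpair(z: int) -> tuple[int, int]:
    """Invert cantor_pair, recovering both inputs."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    k2 = z - w * (w + 1) // 2
    return w - k2, k2

def pack_path(labels: list[int]) -> int:
    """Fold a list of per-hop labels into one integer, one pairing per hop."""
    acc = labels[0]
    for label in labels[1:]:
        acc = cantor_pair(acc, label)
    return acc
```

Unpairing step by step recovers the labels in reverse order; even three small labels already pack into noticeably more bits than the labels themselves, which matches the observation in the talk.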
A
A label stack similar to MPLS was another option, but we chose instead to use fixed-size labels in a polynomial-style encoding for what we propose to put in the protocol; it seems to have the best trade-offs in terms of flexibility, size, and processing overhead. The way it works is pretty straightforward: an interest contains a path label marked with something called discovery mode, and is forwarded normally through the longest-name-prefix match in the FIB.
A
When a content message, a data message, comes back, it carries the path label, which has been modified on every hop to add information. And what is the information that's added? It's a next-hop label for every next hop that a future interest would take. Then, once you get back this data structure in the path label of the data message, you can insert it into a future interest message, and that interest message will be routed over the path that was previously discovered.
A
So you do a longest-prefix match; within that FIB entry there will be a matching of the next-hop code point, and that will cause the outbound face under that FIB entry to be selected for that subsequent interest packet. So it's relatively straightforward. Some obvious advantages of this: an ICN ping application can reliably measure the path RTT by sending multiple interests over the same path, and a traceroute application can iteratively discover multiple network paths.
A
Now, one thing I'll bring up really briefly here: there's partial overlap with the existing work in ICNRG on CCNinfo, which has a separate forwarding strategy for doing path exploration, discovering all the possible paths that the various FIBs contain, returning those, and then being able to diagnose. That's clearly more powerful; it's potentially higher overhead and has some other, different characteristics. So we're not necessarily proposing this as an alternative to CCNinfo; it's another tool in the bag of tricks.
A
Okay. Consumer multipath congestion control algorithms can discover and distribute load across the paths. And as a sort of side benefit, if you suspect you're getting a content poisoning attack because the particular path selected by the forwarders passes through a poisoned cache, you can select a different path which routes around potentially poisoned caches. And then, a brief mention: you can do traffic engineering solutions, if you happen to believe the SDN religion, by pushing everything into routers via third parties.
A
So what are the complications here? Well, the clear complication is: how do you invalidate paths if the routes are changing? With path steering you still use the prefix match to find the set of next hops, from which the path's next hop is chosen. So the only time you need to invalidate a path is when you look up the next hop in the FIB entry at a forwarder and the next-hop label that you had no longer matches.
A
A
Okay, and on the forward path the interest is intact. The packet encoding is a very careful trade-off between how big this has to be in order to handle longish paths, more than a few hops, and how much compute you need to actually select a particular path. So we add a new hop-by-hop header, which is the path label, and the individual next hops are constructed as a bitmap in 12-bit chunks.
A
Here's a quick picture of the data structure. It contains the path label; some flags, namely discovery mode and strict vs. fallback mode; and something called the path label hop count. That hop count is important for two things: one, it tells each forwarder where in the path label bitmap to extract the particular next-hop label you want; the other nice property it has is that, since it counts opposite to the hop limit, you can detect when you traverse forwarders that don't support the path labeling. (You need me to finish up?)
A
OK, I'm almost done. We use 12-bit labels, which is a trade-off between how quickly you need to invalidate paths in order not to have aliasing, and how much space you need. Some security implications, real quick: clearly, consumers can try to maliciously mis-steer interests, because if you use a 12-bit next-hop label, you only need 2^12 interests in order to explore and find a valid next-hop label to try to exploit. So, to mitigate this,
A
you just periodically update the next-hop labels to limit the lifetime of paths. The path label can also be encrypted hop-by-hop (we have a hop-by-hop encryption capability that doesn't require any key sharing) so that you don't leak topological information to consumers about all the forwarders on a path. There are also some cache pollution potentials, where a consumer and a producer can collude to cause poisoned information to be returned over certain paths.
A
I
I
Just looking at the data structures: in the interest we have two timers. One is the signature time of the interest message, in case the interest is signed; and we have the interest lifetime, which is a relative timer and describes how long an interest should be maintained in the PIT. On the other side, at the data side, we have even three timers.
I
One is again the signature time, the time when the data was signed; and we have two other absolute timers, the expiration time and the recommended cache time. Those are absolute timers, whereas the interest lifetime is a relative time. The idea is to do better at compressing these timers. We had a proposal in the ICN LoWPAN draft, but the problem is: if we compress times as I will discuss here, then we can't recover the original times in all cases. And now the idea was, okay:
I
we've looked at the current representation of timers. We have the relative times and the absolute times. The relative times are time deltas in milliseconds, and they can be one to several octets, so they are flexible. The absolute times are inflexible in the sense that they are NTP timestamps and are eight octets in length, so that's actually a pretty large number. Oops.
I
The idea, discussed already in the ICN LoWPAN draft and actually harvested from the IP MANET world, from RFC 5497, is to represent times on a logarithmic scale, so that you can express, in one to two octets, a relatively large range of times, but with different granularity: for small values you have a fine granularity, and for larger values you have a coarser granularity.
I
I
I
Now, you build timer values as you see in the equation: they're constructed as a mantissa times an exponent, (1 + a/8) * 2^b, times the value of a constant C that fixes the range. So we have to fix three values: the length of the exponent field, the length of the mantissa field, and this C value. The example is for a mantissa length a of 3 bits, an exponent length b of 5 bits, and C = 1/1024 seconds; those are actually the numbers taken from the RFC previously mentioned.
I
I
So, looking at the timers again: the absolute times are difficult to compress as delta times as such. Look at the signature timers: these signatures can be in the past, so you can't express them as deltas from now; they can be far in the past.
I
I
The expiration time and the recommended cache time are also encoded as absolute values in the CCNx spec, but that could actually be changed: for example, taking the signature time as the baseline and then using deltas to express the expiration time and the recommended cache time. That would be a discussion we would like to have with you, or on the list: whether this should be done or not. It would actually change those two other timer values into much shorter encodings, yeah.
I
What will be the next steps? Well, we would probably investigate a bit which ranges are appropriate for the different applications and the different delta timers. For instance, the values I showed you would be very nice for the interest lifetime, I believe, but they may not be appropriate for the deltas in the object lifetime, yeah. And otherwise, I guess that's it.
I
D
Thank you, Thomas. Right, so that was the important information at the end: the whole idea of this work is to unify the time compression that has been done before in, say, the IoT space, and possibly update the CCNx specs with that. That's why I think it's important for people to review this and basically tell us what you think about it.
I
One thing I forgot to mention: some of these timers are part of the envelope that is signed. So if we cannot recover the timers after the compression, we cannot actually compress these timers without destroying the signature; right, that's one of the major motivations, actually. Sorry, I forgot this. Okay, the other item is just a very brief recap, or rather a very brief addition, on what happened to the ICN LoWPAN draft.
I
This has actually been presented several times, discussed in quite some detail, has matured, and is about ready to go. So what did we do? We had several reviews, also on the mailing list, in particular by Mr. Shi (I have trouble pronouncing the name), who also commented on several things; the last ones were typos and operators.
I
What we also did is harmonize the ICN LoWPAN draft with the timer draft I just presented a second ago, so that the two fit together nicely. The second discussion question we had from the Montréal meeting: this draft originally concentrated on efficiently compressing the default standard cases, and as the default standard case we identified the generic name components for named objects; this is actually what is shown here. So what we do is, we have a compressed,
I
extensible scheme. The idea is: in the ICN LoWPAN we have (do you want to interrupt? no? keep going), in the ICN LoWPAN we have dispatch fields, and these dispatch fields can be extended; they have extension fields. The idea is to define such an extension field here, with two further values, to flexibly incorporate name components, for instance defined in dictionaries or in something else that could actually be specified in the future.
I
A
I'll just make the comment now; go back one slide. This seems to me, anyway, to be a poster-child example of something that you could use manifest metadata for.
A
I
D
Thank you, Thomas. Just on the last point: I mean, we have to find a way to deal with these dependencies in a meaningful way. So if you think about the manifest approach, I hope this wouldn't affect the current draft, so we don't have to wait with this draft while we figure out a way to use manifests for the dictionary description, right? Yeah.
I
D
Right, so I think from our perspective it's a good time to last-call this draft now, and that shouldn't keep us from thinking about the manifest idea later. Okay. So we will issue a last call on the mailing list; please, please, please have a second look at the draft. This is also going to be an important specification for ICN in IoT, so we want to publish it, because it's important for building IoT solutions with ICN.
D
E
Right, thank you. This will be a fairly short update; I'm giving it on behalf of my co-authors. Just to quickly orient you to the concept, if you're new to IPoC: the idea is to use CCN as the forwarding plane for a mobile core network. So, not running CCN on top of IP on top of LTE (think EPC and things like that), but essentially using CCNx as that forwarding plane and then running existing IP services
E
on top of that. So it's IP over CCN, and the idea is that it would replace the LTE EPC and the GTP tunnels that are used for mobility. Then, once you've got that CCN forwarding plane in your mobile network, you can deploy native CCN applications and get all the benefits, you know, the in-network caching and mobility that come along with that. Status: I did a fairly detailed presentation on the protocol at the interim before the Montreal ICNRG/IETF meeting; the link is here.
E
There's only one open item that I'm aware of (maybe I'm forgetting something), and that was a comment from Thomas about considering a comparative discussion of Mobile IPv6 and multicast mobility. We've had some offline exchange via email and plan to meet sometime this week to discuss that, yeah.
I
Just a comment. I mean, the point is you're trading CCN mobility versus IP mobility, and you're in a research group, so it actually makes sense to position it between the two. And multicast is interesting in this case, because CCN is also reverse-path forwarding, so multicast mobility is actually the direct counterpart of your approach. That's at least for scientific completeness, I guess, yeah.
E
I have no objection to doing that. I think we could come up with a short subsection in the document that discusses it; I wouldn't want that to become the bulk of the document, but we can certainly cover it, and with your help, Thomas, since I'm not an expert on multicast mobility, or actually on Mobile IPv6 either, I appreciate the offer to help.
D
H
So I just want to confirm the deployment strategy, like ICN in 5GC. Maybe this is not relevant, but I want to confirm: is this the first mode or the second mode, overlay or underlay? We were quite confused about that definition. So is that part of this study, in the deployment considerations, or not? Which one?
E
IPoC is discussed in the ICN deployment guidelines draft, and I'm not remembering exactly the terminology that's used there, but essentially it is using CCN forwarding natively over, say, Ethernet or another layer-2 technology in the core network, not with IP underneath it, and then running native CCN applications, or IP applications (in particular, in this case, IP applications) on top of CCNx as that forwarding plane.
D
E
D
B
L
Hello, everyone. This is version -00, but it's actually a revised draft; we presented it at the last meeting, in Montréal. The idea I'm presenting now is what we call Internet services over ICN over 5G LAN. In terms of ICN over 5G, we actually have another related ICNRG draft, on ICN over 3GPP's 5G next-generation core network.
L
So the other draft talks about how to extend the 5G control plane and data plane in order to support ICN over a 5G network or 5G system. The current one is kind of an extension, a step further: think about, if we have HTTP, for example, Internet services such as HTTP, and you want to ride them over ICN. And here we focus on 5G LAN, because 5G LAN is kind of a virtual LAN for a group of UEs. So we want to look at how to support this.
L
So, basically, in this draft we presented two use cases, about the control plane, because, you know, the 5G control plane uses the HTTP protocol, which is a kind of Internet service. The second use case we are including is HD streaming. So we introduced a 5G LAN architecture accordingly; of course, this is an ongoing study in 3GPP.
L
The architecture of the 5G next-generation core network supports 5G LAN. In this draft we also amend that architecture in order to support Internet services over ICN over 5G LAN. For example, we need to introduce an ICN API in order to serve the upper layers: through this API, an upper layer such as HTTP can ask the ICN layer to send a packet.
L
As part of that, we also consider service proxy operations in order to support legacy devices, meaning devices that do not support ICN natively. Those legacy devices talk to the service proxy, and the service proxy translates the legacy device operations into what we propose here at the ICN / 5G LAN layer. And also, as part of that, we need to have such a component.
L
We also consider dual-stack device support, because in some cases a device supports the common IP protocol and also supports what we propose here, Internet services over ICN over 5G LAN. Then, for the deployment considerations, we basically took the guidelines from the research group document on ICN deployment; specifically, the option we take is ICN in the underlay, and, since we are here, we consider how to do that in 5G LAN.
L
We also take ICN in a network slice, because, you know, the 5G system supports network slicing, and that's being considered in this draft as well. So: we received a few comments back at the July meeting in Montréal, and we made two major changes. Number one, we added a paragraph describing how to realize, how to achieve, these Internet services over ICN over 5G LAN over, you know, some other transport networks.
L
We talk about the ICN transport networks in Section 4.2.1. The second change: we added a paragraph to clarify that we can also use BIER traffic engineering, one type of other transport network, in order to support the Internet services over ICN over 5G LAN. The essential idea was basically that the BIER-TE controller is quite similar to the SDN controller, so we can use it to configure,
L
you know, the bit positions and the bitstrings in order to utilize the path-based forwarding, for example from one UPF to another UPF. Among the other changes, we replaced "IP services" with "Internet services". One of the reasons, also a comment by the chairs at the last meeting in Montréal, is that I think "Internet services" is more meaningful and more consistent with the content and the motivation covered by this draft. So that's kind of the two major changes we made so far.
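To make the BIER-TE analogy concrete, here is a small illustrative sketch (my own; the names and bit layout are assumptions, see RFC 9262 for the actual mechanism): the controller assigns each adjacency a bit position and encodes a path as a bitstring; each hop forwards on the local adjacencies whose bits are set and clears those bits to prevent loops.

```python
def bier_te_forward(bitstring: int, local_adjacencies: dict[int, str]) -> tuple[int, list[str]]:
    """One hop of BIER-TE-style forwarding: send on every local adjacency
    whose bit is set, and reset those bits in the forwarded copy."""
    next_hops = []
    for bit, neighbor in local_adjacencies.items():
        if bitstring & (1 << bit):
            bitstring &= ~(1 << bit)
            next_hops.append(neighbor)
    return bitstring, next_hops
```

A controller that knows the topology would set, say, bits 0 and 3 to steer a packet from one UPF across two chosen links, much as an SDN controller installs a path.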
L
Of course, there are some other future updates we plan to do. For example, currently we have two components, the flow management and the mobility handling, and those two components are currently quite brief. We plan to, you know, make future updates; for example, for the flow management we plan to describe how Internet transactions are mapped onto ICN transactions, and their relation with the joint flow control across all transactions. I'm not going to repeat, you know, the parameter tuning and so on here.
L
We will add more description and more content around those two components, flow management and mobility handling. Another thing we plan to do: we plan to show a demo at the next meeting, IETF 107 in Vancouver. The plan is to showcase a realization of Internet services over, for example, an SDN transport network using ICN-based routing. Of course, we mentioned one common component we propose, the service proxy.
L
A
All right, quick question: you're running HTTP on top of an ICN protocol, yes? So what spec defines that mapping?
A
L
Our whole idea was: the ICN, because HTTP will be above ICN. So basically you were asking how to map, right, because the two layers are different. One idea is that the ICN layer is going to be extended by providing certain APIs to the upper layers. For example, two of the APIs described in the draft are "send a packet" and a receive operation. So, using this, assume this ICN API is there; then, in order for HTTP to send a packet from a source to a destination,
L
the sender at the HTTP layer is going to call this ICN API, which talks to the ICN layer on the sender side and sends the packet down to the ICN layer. From there, the ICN layer is going to, you know, use the ICN naming plus the path-based forwarding in order to send this ICN packet.
L
Of course, the payload will be the HTTP packet. This ICN packet is sent, for example, from one UPF to another UPF, and eventually the destination UPF is going to, you know, recover the original HTTP packet and send it on, eventually to the UE, for example. So basically, to answer your question, if I understood it correctly: there will be something we need to propose, and we need to extend the ICN layer, for example here by providing certain APIs to the upper layer like HTTP. Okay.
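A minimal sketch of what such an ICN API could look like (all names here are hypothetical illustrations, not from the draft): the upper layer hands an opaque payload, such as an HTTP message, to the ICN layer under a name, and registers a callback for delivery on the far side. A loopback "network" stands in for the actual ICN naming and path-based forwarding between UPFs.

```python
from typing import Callable

class IcnApi:
    """Hypothetical upper-layer API: send_packet plus a receive callback."""

    def __init__(self) -> None:
        self._handlers: dict[str, Callable[[bytes], None]] = {}

    def register_receiver(self, name: str, handler: Callable[[bytes], None]) -> None:
        """Destination side registers interest in a name."""
        self._handlers[name] = handler

    def send_packet(self, name: str, payload: bytes) -> None:
        """Sender side hands an opaque payload (e.g. an HTTP message) to ICN.
        Real forwarding between UPFs would happen here; this sketch loops back."""
        if name in self._handlers:
            self._handlers[name](payload)
```

An HTTP layer would call `send_packet("/5glan/ue1/http", request_bytes)`, and the destination side recovers the original HTTP message from the payload, as the speaker describes for the destination UPF.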
A
D
K
K
Yes, I'm from NICT, Japan, and today I will present updates on the hop-by-hop authentication draft. Last time, at IETF 105, I presented the basic design of the hop-by-hop authentication and got some feedback on the motivation and on the initial trust establishment. So this time I give some updates on the draft: the updated version -01 includes the motivation clarification and the modifications to the initial trust establishment.
K
First, a simple introduction to content-centric networking and named-data networking. There is a consumer that sends out an interest; the interest is forwarded hop by hop and finally reaches the content holder, and then the content holder replies with the data. In this procedure, three entities are involved: the publisher, the consumer, and the content holder.
K
K
An attacker can impersonate a consumer to request data. Much existing work focuses on restricting the interest rate, but we argue it is still necessary for the content holder to have a means to authenticate the interest packets. This figure shows some features of the content poisoning attack: the attacker pretends to be the provider and replies with corrupted data; then, when a consumer sends out an interest, the corrupted data packet will be replied to the consumer.
K
So this is problem one: a consumer always receives the wrong data, because the intermediate routers cache the corrupted data. After the attacked data is cached hop by hop, when the consumer sends out an interest again, the wrong packet will be replied to the consumer from the cache memory of the closest router. The second problem is that the attacker can pollute the routers' caches as it pleases.
K
So we identified two requirements here. The first requirement is that all the routers on the path need to verify the data before caching, but we want to avoid heavy and complex tasks and key-management systems. The second requirement is that the consumer needs to verify the content holder and the path the data traversed, in order to identify the polluted entities.
K
Of course, there is much existing work focusing on rate limiting. But in the end some malicious interests can still reach the content holder, so the content holder still needs to reply with data; so that alone is not a solution. We identified two requirements here as well: the first one is that the hop-by-hop routers need to eliminate the chance of the interest flooding attack.
K
So it should be a data-oriented mechanism that doesn't necessarily rely on external servers, but it doesn't exclude the use of a certificate authority, as a CA can contribute to the trust model we propose in our hop-by-hop authentication. Here I'll introduce our modification; from here on is the introduction to the initial trust establishment. For the case of self-certifying names, we use the hash of the public key, embedded into the name, to prevent the stealing and forgery of existing names.
K
The solution here: we embed the public key's hash into the name to make the name self-certifiable. The name owner can use the corresponding private key to assert its ownership, by signing the messages sent from that entity. But we need to note that an attacker can create a new name from an arbitrary public key; however, the attacker cannot impersonate somebody else's name.
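A tiny sketch of the name-to-key binding just described (illustrative only; the draft's actual name format and signature scheme are not shown): the name carries a SHA-256 hash of the owner's public key, so any node can check that a presented public key really belongs to the name before verifying signatures made with it.

```python
import hashlib

def self_certifying_name(prefix: str, pubkey: bytes) -> str:
    """Embed the hash of the public key as the last name component."""
    return f"{prefix}/{hashlib.sha256(pubkey).hexdigest()}"

def name_matches_key(name: str, pubkey: bytes) -> bool:
    """Check the name-to-key binding without any external authority."""
    return name.rsplit("/", 1)[-1] == hashlib.sha256(pubkey).hexdigest()
```

An attacker can mint a fresh name from its own key, but cannot produce a key that hashes to someone else's existing name component, which is exactly the property stated above.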
K
K
B
B
D
So we had a couple of additional side events on ICN this time. One was a hackathon activity by NTT and colleagues on the Cefore implementation; another one was a QoS discussion that wasn't really at the hackathon, but took place on Sunday. I'd really like to update you on those.
E
F
So Cefore is open-source software, a software platform enabling ICN communication, which complies with the two RFCs. The main objective is to enhance several functionalities in the context of two proposed drafts: one is CCNinfo, and the second is network coding. More precisely, in the second
F
D
G
A
I'm just going to sit down. So, as you know, we announced a sort of informal side meeting about what ICNRG might do about quality-of-service stuff going forward. We didn't get a lot of attendance, but the attendance we got brought some pretty good engagement from a variety of people. So I'm just going to quickly run through some of the things we talked about, and our idea is, if people are interested, we'll take a poll at the end on whether we should have a follow-on informal get-together later this week; we would be targeting the early-morning open slots on Thursday.
A
A
Do we want to actually try to come up with a defined architecture for doing quality of service in ICN? How much should the experience we have doing quality-of-service work in IoT and low-end, low-bandwidth, low-capability network environments drive some of our quality-of-service work? Because that's where people have spent a lot of time. Sort of the high-level question is: who cares? If people don't care, there's not going to be enough participation to really make progress.
A
Some more detailed questions: do we want to mirror the kind of class-based quality-of-service capabilities that Diffserv provides, or do we want to also accommodate flow-style quality-of-service capabilities similar to what IntServ does? And who will implement and measure whatever we come up with? Because if we just write specs, that'll be, you know, fun for putting things together, but not really affect things. So we have a whole bunch of notes; I'm not going to go through all of them.
A
A
Some more: for example, there's already some confusion about why some people think flow and congestion control are very closely related to one another, while other people think those are separate topics and want to talk about flow control and congestion control separately; and about whether QoS treatments should be bound together with flow classification or not. So there was some general talk about the quality-of-service architecture polemic that I published a few months ago, and what we should do about that.
A
A
A
A
D
D
So, thank you all. As you have seen, this was a pretty packed agenda, so thank you for your contributions and your discipline. Next time we'll have to find a way to deal with that; we are likely going for another full-day interim. On the other hand, we would also like to encourage more hackathon work, and that kind of conflicts with the weekend before. So maybe let's discuss this a little bit.