From YouTube: ICNRG Interim Meeting, 2020-04-20
F
So this is the summary of changes. The first one is the node identifier. Previously we assumed that the node identifier would be an IP address, but according to the comments we received, we changed the node identifier assumption from an IP address to a node name, like content naming. The second one is about the information reported in sub-blocks, which in fact routers are not obliged to fill.
F
Okay, so anyway, we need to clarify that CCNinfo allows omitting complex function implementations. And the third one is the regular request and the full discovery request: we also clarified the regular request, which is the default, and the full discovery request, which is optional. And there are several editorial corrections and improvements.
F
So the first one, the node identifier. The previous text, in section 3.1.2 regarding the Report block, said: this field specifies the CCNinfo user's node identifier — for example, the IP address of the incoming interface on which packets from the publisher are expected to be delivered, or zero if unknown or unnumbered. Now we change this statement to the following: this field specifies a node identifier, which can be a node name or a hash-based value satisfying the naming conventions, or zero if unknown.
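As a rough illustration of the hash-based option for the node identifier — the choice of SHA-256 and the truncation width below are my assumptions for the sketch, not taken from the draft — a fixed-width identifier could be derived from a node name like this:

```python
import hashlib

def node_identifier(node_name: str, width: int = 16) -> bytes:
    # Hash-based node identifier: SHA-256 of the node name, truncated to a
    # fixed width. Both the hash choice and the width are illustrative only.
    return hashlib.sha256(node_name.encode("utf-8")).digest()[:width]
```

The point of a hash-based identifier is that it is stable and deterministic for a given node name without revealing an IP address.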
F
The next one is the information reported in sub-blocks, in section 3.2.1.1. We clearly state: note that some routers may not be capable of supporting the following values — such and such — as shown in Figure 15. The figure actually shows the message format of the Reply sub-block, and, as I said, some routers do not have the capability of reporting these counters or values and so on, or some routers don't want to disclose such kind of information. These values therefore may be returned as null.
F
We explicitly mention here the regular request and the full discovery request; the two are different. We already discussed some supportive mechanisms, especially for full discovery requests, and we want to keep the full discovery request as well, but the default is the regular request. For a regular request, a router forwards the request message upstream towards the publisher or a caching router based on the FIB entry, like ordinary interest/data communication. So if the router...
F
For regular requests, routers can treat them just like ordinary interest/data communication, so they don't need any special behavior for the request-and-reply communication. But if the full discovery request is supported by a router, then it needs to support various additional functions compared with the regular request. So we explicitly say: unlike the ordinary interest/data communication, if a router that accepts full discovery requests receives a full discovery — oh sorry, after the full discovery request, the router should... I need to modify the statement.
F
The router should not remove the PIT entry created by the full discovery request until a sufficient reply timeout period expires. This is not a common behavior, so for the full discovery request the router must support this special behavior. But note that the full discovery request itself is an optional implementation of CCNinfo; it may not be implemented at all. Even if it is implemented, a router may not accept full discovery requests from non-validated CCNinfo users or routers, or because of its policy. And a router...
F
...that does not accept a full discovery request will reject the full discovery request, as described in the corresponding section, and the routers that enable full discovery requests may reply as described in another section of the draft. So we explicitly mention the difference between the regular request and the full discovery request.
B
Yeah — maybe one of the things we could work on together is, for both the CCNinfo draft and the related draft, to cross-reference each other.
G
Right — thank you, Dirk, for the introduction. My name is Cenk. We have the seventh iteration of the ICN LoWPAN draft, and here are basically four little amendments to the draft from the sixth iteration to the seventh; they are kind of small. Thanks to Colin, who pointed out that RFC 5743 actually demands having various notices in the abstract and introduction to identify this draft as a product of the RG.
G
So we added a couple of notices there. Then, section 4.1.1 basically describes how we allow extensions for the dispatches, for example in future drafts, and there we added a paragraph that future drafts should use the structure of manifests — like, for example, in the FLIC draft — and we also put a link to the FLIC draft here, for the exchange of configuration parameters.
G
And then the third amendment — again thanks to Colin — is that we added some information about which experimental evaluations could be interesting for future iterations, and how they would be fruitful for the ICN LoWPAN work and how to advance in this direction; so we added a couple of paragraphs there. And then, last but not least, we added an IANA considerations section, where the code points are basically marked as to be defined.
E
It's great to hear you. The typing was coming through, so I just had to mute you — sorry.
G
First, we had a version bump a couple of weeks ago, from 00 to 01, and we had a major change in section 4, which talks about how we do the lifetime encoding. Basically, we are now using a formula that is based on IEEE 754, the floating-point specification. Thanks to Marc for the suggestion: I created a couple of example values to see which values we get from the compression, or which values we are allowed to use.
G
If you have an exponent of 0, we use the upper formula, and if you have an exponent in the blue range, greater than 0, then we take the second formula. The idea of having a subnormal range is that we close the gap between 0 and the lowest number that the normal formula can show. And, as a quick idea of why we took these configurations: I will now show you sample configurations, and we will see how they perform in this graph.
G
On the x-axis you can see the time code — I said we have 8 bits, so we go from 0 to 255 — and on the y-axis we have the time in seconds. So if you look at the configuration three-five-zero, which means we have three bits for the exponent, five bits for the mantissa, and zero as the bias that you can apply in the formulas to the values themselves — in this configuration we can see:
G
Okay, we have the red dots, and everything between the red dots is the mantissa, or the precision itself. Every time we encounter a dot we move to the next mantissa range — that is, the exponent increases. So we can see we have a fairly large precision here, but the range itself is really low: we only get close to 1000 seconds, which is not enough for our use cases.
G
If you look at another configuration, four-four — four exponent bits, four mantissa bits — we can see that the precision itself is halved, so we get less precision, but the range is much higher: we can almost reach 10 to the power 5 seconds, which is a little bit more than a day. But a day is still too little for our use cases, so we go one step further and take 5 exponent bits and 3 mantissa bits.
G
You can see that the precision is again much, much lower, but in this case we reach a huge range — this is basically, I think, 120 years we can represent with this configuration. But then again, do we really need these high numbers? Maybe we should concentrate more on the lower end. This is what the bias is doing: if you apply a bias of minus 5, this means we divide all values by 32. You can see the new configuration is just shifted down on the y-axis, so we have a lower range, but we have more values at the lower end that we can use. With this configuration we have nearly second resolution at the lower end, and it goes up to about 3 to 4 years, which is fairly enough for interest lifetimes or cache lifetimes.
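The two formulas can be sketched as a small decoder. It follows the IEEE-754-style split into exponent and mantissa described above, with a subnormal formula for exponent 0; the exact exponent offset and the placement of the bias are my guesses for the sketch, so treat the constants as illustrative rather than the draft's normative encoding:

```python
def decode_time(code: int, exp_bits: int = 5, man_bits: int = 3, bias: int = -5) -> float:
    """Decode an 8-bit time code into seconds (sketch; constants illustrative)."""
    m_max = 1 << man_bits
    exp = code >> man_bits      # high bits: exponent
    man = code & (m_max - 1)    # low bits: mantissa
    if exp == 0:
        # subnormal formula: closes the gap between 0 and the smallest normal value
        return (man / m_max) * 2.0 ** (1 + bias)
    # normal formula: implicit leading 1, as in IEEE 754
    return (1 + man / m_max) * 2.0 ** (exp + bias)
```

With 5 exponent bits, 3 mantissa bits, and a bias of minus 5, code 0 decodes to 0, the smallest normal values land in the sub-second range, and the largest code reaches on the order of years — roughly matching the 3-to-4-year range mentioned above.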
Then there was another update in the draft, which handles the protocol integration: we now have this compressed time encoding.
G
But how do we use this in the CCNx protocol itself? We said, okay, we will concentrate in this draft only on the interest lifetime and the recommended cache time — which of course has the effect that the RCT, which represents the cache time, is currently an absolute time representation: milliseconds since the UNIX epoch. If we want to use this encoding, then we have to make the RCT a relative timestamp.
G
So, okay, then we say in the draft: if we use the compressed time, then the RCT becomes a relative offset. And currently we opt for the solution that if the TLV length of the RCT or the interest lifetime is one — that is, only eight bits — then we use the compressed time; if it's anything else, then we use the same specification as in plain CCNx. There are alternative integrations.
G
So, instead of using this trick of setting and looking at the length, we could also go with nested TLVs, which obviously has the pitfall that you introduce overhead — especially in the IoT case, that's not desirable. Or we could define new top-level TLVs — for example, an interest-lifetime-compressed kind of variant — and we could say, okay, instead of InterestLifetime, use InterestLifetimeCompressed. The next steps for this draft are to further discuss this protocol integration.
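The length-based discrimination described above can be sketched as a small parser: a one-byte TLV value is taken as the compressed encoding, anything longer as the standard fixed-width milliseconds field. The compressed decode formula inside is my guess at the draft's encoding, so the constants are illustrative:

```python
def parse_lifetime(tlv_value: bytes, exp_bits: int = 5, man_bits: int = 3, bias: int = -5) -> float:
    """Return a lifetime in seconds from an InterestLifetime/RCT TLV value."""
    if len(tlv_value) == 1:
        # a one-byte value signals the compressed (exponent/mantissa) encoding
        code = tlv_value[0]
        m_max = 1 << man_bits
        exp, man = code >> man_bits, code & (m_max - 1)
        if exp == 0:
            return (man / m_max) * 2.0 ** (1 + bias)   # subnormal range
        return (1 + man / m_max) * 2.0 ** (exp + bias)  # normal range
    # any other length: standard encoding, milliseconds as a big-endian integer
    return int.from_bytes(tlv_value, "big") / 1000.0
```

The appeal of this integration is that it needs no new TLV types and no nesting overhead; the cost is overloading the meaning of the length field.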
G
We didn't get much feedback yet, but I hope that people have more ideas on what would be the best way to integrate this code point, or the compression, into CCNx. And then we have received a lot of feedback from Marc regarding how to improve the draft itself, and he recommended that we put all the time values you can represent with a configuration into the appendix as a table — a huge table — and also to provide the algorithm.
B
...at how to apply ICN in environments other than simple content retrieval, and whether these types of protocols are actually a good way to do things like remote procedure calls and sensor networking, and other sorts of more computationally focused uses than just simply "ask for some data and get it back." All right, how do I drive this? There we go. So the way the talk is going to go is: I'll talk a bit about the motivations, or why one might want to use multi-way...
B
...interactions rather than just simple request/response. ICN people have tried to do this in the past, so we'll go over some of the problems with the approaches people have tried in the past, and then I'll introduce this design that we have for a facility called reflexive forwarding. I've already talked a bit about the...
B
So some of the motivations for doing this: applications often need multi-way handshakes — things like any type of RPC or remote method invocation. Not only do you have to invoke the method, but somehow the arguments have to make it from the client to the server; you have to have some way to perform authorization of the clients; and in some cases, particularly for long-running computations, you'd like to separate the invocation of the computation from the return of results.
B
Okay, here we go — all right, the slides move themselves, amazingly. All right: for sensors and actuators, we'd really like to see a way that the data transfer can be initiated by them, rather than just pulled, so that sensors...
B
This is obviously well known, because transport protocols provide three-way handshakes, and other protocols — for instance, audio/video media sessions — need multi-way handshakes. Next slide.
B
Slide — yeah, okay. So people have tried to do these things with NDN and CCN in the past, but there are two classes of problems, one of which is that people wind up pushing a lot of data in interests, and when interests get really big you might even need fragmentation...
B
...and you need complicated fragmentation and reassembly protocols; and it also messes up a fairly deep assumption in the existing protocol, that interest messages are small. Congestion control protocols that people have designed for ICN definitely try to exploit that, and when interest messages are no longer small, congestion control gets messed up. The second is that if you're going to put important data in interests, you're going to need to sign them, or the guy on the other end isn't going to believe it.
B
The other thing is, given that the protocols are independent two-way exchanges of request and response: if you have to construct a multi-way exchange out of independent two-way exchanges, with one of the exchanges going in one direction and one going in the opposite direction, now somebody who's a consumer...
B
...breaks the assumption that consumers have certain anonymity properties and the initiator property: now consumers need a routable name prefix, so that the independent interaction coming in the opposite direction can reach them. This has a number of bad effects. It exposes a consumer to potentially unwanted traffic; it puts burdens on routing to propagate the routable name prefix far enough to reach them; and in a mobile environment — where ICN has been touted as having sort of natural consumer mobility, but has complexities when you need producer mobility —
B
...producer mobility has to be operating for consumers as well. Another problem, of course, is that the consumer in these cases gets to choose the name it wants to be reached by, and as we've seen in many cases — like FTP and other things in the IP world — if you allow a user to assert a name, hand it to a second party, and have that second party use that name, this opens up...
B
...reflection attacks, where a consumer can cause a producer to mount a reflection attack against anybody whose name they can construct. And then, lastly, just from a state-machine point of view, correlating...
B
...exchanges can be very error-prone, and as we've seen in the case of key exchange protocols, this can of course be catastrophic. And for protocols like multimedia: any of you who've lived in the world of SIP and SDP understand that getting the synchronized state machines of SIP going in one direction and SDP offer/answer going in the other direction has been just a horrible mess for ten years. Next slide.
B
Right, okay, I'm getting even closer now. Obviously — I had a really nice animation, so if I'd been able to use PowerPoint it could have been a lot easier to see, but I'll walk you through it very quickly. If you work top to bottom, with a consumer, a forwarder, and the producer: the consumer issues an interest message with a name for a certain producer, and includes an extra field of the interest message, which has a value noted here as X1.
B
This
creates
two
pieces
of
state
in
the
forwarder
creation
it.
It
also
creates
a
special
fib
entry,
which
points
back
to
the
face
that
the
interest
arrived
on
from
the
consumer.
This
then
arises
the
producer
who
can
create
the
space
is
used
for
change,
but
also
has
some
state
that
it
can
use
messages
that
are
reflexive
going
back
the
other
way.
So
this
shows
one
instance
of
a
reflexive
interest
going
back
through
the
same
forwarder.
Creating
a
pit
entry
and
reaching
the
consumer
consumer
does
what
he
does
with
it.
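The two pieces of forwarder state can be sketched like this. The structures and names below are mine, not the draft's; the point is only that the reflexive value in the interest installs a temporary FIB-like entry pointing back at the arrival face, so producer-issued reflexive interests can reach the consumer without any routable consumer prefix:

```python
# Sketch of per-forwarder state created by an interest carrying a reflexive
# value (here an integer `rn`); structure and names are illustrative only.
pit = {}            # interest name -> face the interest arrived on
reflexive_fib = {}  # reflexive value -> face back toward the consumer

def on_interest(name, rn, in_face):
    pit[name] = in_face
    if rn is not None:
        # special FIB entry: reflexive interests carrying `rn` are routed
        # back to the consumer's face, not via regular routing
        reflexive_fib[rn] = in_face

def route_reflexive(rn):
    # a producer-issued reflexive interest is forwarded on this face, if any
    return reflexive_fib.get(rn)
```

Both entries are per-exchange state and would be removed when the exchange completes or times out.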
B
The response comes back, consumes the PIT entry, and reaches the producer, and then the producer completes the entire exchange with the original data message. So what we have here is normally a four-way handshake, and it can be turned into a three-way handshake if the application doesn't actually need the final message. Next slide. Okay, so the machinery for this is relatively straightforward. We define a new name component type — and remember, CCNx has long had typed name components; NDN originally had some syntactical conventions for names and now also has explicit name component types.
B
The random number is chosen to have enough entropy so that you can identify the consumer with high probability for the duration of an exchange — actually much longer than the duration of an exchange, certainly for the duration of any reasonable exchange. And since we use a different value of this for each initiated exchange, this limits any kind of linkability you can get from reuse of these identifiers. So you can construct a variety of different interactions.
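Generating a fresh value per exchange is easy to sketch. A 64-bit value from a cryptographic RNG gives roughly an n²/2⁶⁵ chance of collision across n concurrent exchanges, which is negligible for any reasonable exchange duration (the 64-bit width matches the talk; the helper name is mine):

```python
import secrets

def new_reflexive_value() -> int:
    # crypto-quality 64-bit random value; using a fresh one per initiated
    # exchange limits linkability from identifier reuse
    return secrets.randbits(64)
```

Using `secrets` rather than `random` matters here: the later security discussion asks specifically for a crypto-quality random number generator.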
B
When an interest arrives, matching the name is very easy — a one-byte check — so you don't have to traverse full longest-name-prefix-match-style FIB entries or any of those things, and you'll see in the next talk, on the high-speed forwarder, some ideas they have for making that just as efficient as any type of hash lookup. And then this same entry is consumed along with that PIT entry.
B
Now let's go through some use cases, because I think if you don't believe these motivating use cases, this is a lot of change to the architecture and to interests for no good reason. So we'll walk through three use cases; let's start with remote method invocation. Next slide. So historically, what happened was: a couple of years ago a bunch of us defined a whole protocol for doing remote...
B
...the returning data message — or it potentially can do it by returning the name of a thunk, which is a handle on the result, so that the consumer can later poll for the result if there's a long-running computation. Next slide.
B
Excuse me. Only at that point does the producer need to commit any resources for the computation, at which point it can return a thunk name to the consumer and perform the computation. The consumer, after waiting a while — there's a way in the thunks to say how long the consumer should wait before asking for the result — issues an independent interest to fetch the result, which then comes back. So that's how reflexive interests are used for remote method invocation. Next slide.
B
So a number of papers in the early time of ICN pointed out that for RESTful web interactions, often the request message in HTTP is bigger than the response; published papers show that the asymmetry can be quite dramatic. So what we'd like to do is again keep the request small, by only placing the actual URI for the request in the interest message, and then turn around and get all the parameters — including any authorization — via a reflexive interest, along with all the HTTP goop: cookies, accept headers...
B
...via reflexive interests, returning the data via a regular data message. Now, compared with HTTP, this in theory gives you an extra half round-trip that otherwise wouldn't be needed. But if you look at how HTTP actually runs over QUIC or TCP, there are going to be multiple round trips through the TCP ACKs anyway. Next slide.
B
A sensor can wake up on a timer, or on an event of new data being available, and issue an interest message which is effectively a phone-home call to an application gateway or an NDN-repo type of element. This in turn provokes a reflexive interest being initiated back from the gateway toward the sensor.
B
I'll point out that in a lot of IoT applications, people care perhaps more about the identity of the gateway than they do about the identity of the sensor — and hence it's perfectly okay for the gateway to repackage the data coming back as an NDN data object with its own name, named by the gateway, and sign and encrypt it. Next slide.
B
So here's the example protocol ladder. The sensor can wake up and issue a phone-home to the gateway; the gateway, as a producer, forms a reflexive interest requesting the data; the data is returned, the result is stored, and either you can complete a four-way handshake or you can just let the interest time out. In the draft there are some suggestions about what interesting information could in fact be returned by the gateway — things like perhaps how long to wait before waking up again — and also the ability...
B
...some of the implementation questions — but go read the draft, because it's very hard to summarize a lot of the stuff that's in there. So, for forwarders: if you have a low-end device that's only forwarding a little traffic per second, you don't really need to worry — a very straightforward implementation technique will do the job just fine. But for a high-speed forwarder — we're about to hear about one in a minute — you have to be very, very careful about memory accesses, and one of the things that this changes is the assumption...
B
So when handling interests, high-speed forwarders in general don't have a PIT as a global data structure; they shard it some way. So if any operations from reflexive interests require lookups — or, even worse, updates — across shards, that can be really tricky. There are two ways a high-speed forwarder can deal with this: one is to just avoid cross-shard updates entirely, or...
G
B
There are also some interesting interactions with interest lifetime, because in a multi-way interaction it's generally very hard for a consumer to actually pick a good interest lifetime. So the draft suggests some ways that forwarders could arbitrarily inflate interest lifetimes.
B
For consumers: the consumers change because, instead of having these independent data exchanges and constructing names, you have a different sort of API and interaction for multi-way exchanges with the rest of the system. The choice that a consumer has when it responds to a reflexive interest — I already sort of mentioned it — is this: anything it returns in a plain data message is meant, in terms of lifetime, to stay inside the single interaction; or you can encapsulate a whole data message inside...
B
...if you're returning data whose lifetime is meant to survive beyond the existing exchange, and then you set the other fields appropriately for the data — I won't go into the details. There is one additional complication: because the state is in the forwarders, it's possible for a producer to bombard a consumer with reflexive interests, and it's nice for the consumer to be able to stop that if the producer is misbehaving — and there's a way to do that. Next slide.
B
So I'll end with one piece of pretty bad news: this is not backward-compatible, since...
B
...you need an unbroken chain of forwarders supporting this, or things don't really work very well. So we suggest three possible ways to overcome this backward-compatibility problem. One is to ignore it — I described how you might get away with this, but I don't really recommend that as the best way forward. We could bump the protocol version number, which deals with it very nicely: anybody adding that TLV would have to bump the version.
B
A forwarder that doesn't understand it would just reject the interest. So that's really simple, but it's a really big hammer, and we want to think carefully before we pull that big hammer out. And then the third option, which is my personal favorite but also the hardest to accomplish, is to create a capabilities exchange protocol, so forwarders know the capabilities of the next hop and can decide whether or not to forward something to the next hop, depending on its capabilities. But that of course requires a whole new set of work to construct such a protocol.
B
The coding changes are the simplest part of the whole thing: it's one new TLV type and a 64-bit integer. Next slide.
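That "one new TLV type and a 64-bit integer" can be sketched as a type-length-value encoding. The type code point below is made up for the sketch (a real one would be assigned later), and the 2-byte type/2-byte length framing is an assumption in the style of CCNx TLVs:

```python
import struct

REFLEXIVE_TLV_TYPE = 0x1001  # hypothetical code point, not an assigned value

def encode_reflexive_tlv(value: int) -> bytes:
    # 2-byte type, 2-byte length, then the 8-byte big-endian 64-bit value
    return struct.pack("!HHQ", REFLEXIVE_TLV_TYPE, 8, value)

def decode_reflexive_tlv(buf: bytes) -> int:
    t, length, value = struct.unpack("!HHQ", buf[:12])
    assert t == REFLEXIVE_TLV_TYPE and length == 8
    return value
```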
So, for security: I think it's important to point out that the big motivation for doing this work in the first place was to improve security, and it's motivated by improving both security and privacy — by avoiding payloads in interests, and thereby all the associated vulnerabilities...
B
...and attacks on producers. It avoids routable name prefixes for consumers, so they aren't exposed to attacks of various sorts, and it avoids sending names that can be crafted by consumers to producers, which can open up reflection attacks. So we view this as actually an improvement to security over the existing ways people have tried to make these capabilities happen with ICN. Next slide.
B
You need one for your crypto algorithms anyway — so just make sure that your 64-bit random values are actually produced by a crypto-quality random number generator. These do produce extra resource pressure on the PIT, so they're more expensive in compute and memory; you may need some resource allocation algorithms in forwarders that put these in a separate resource category, so they don't overrun simpler requests. And lastly, from a privacy perspective: we're in the same world of privacy as other ICN protocols, because they leak names, and this is no different in that regard.
B
...just based on the interaction pattern. Now, this would obviously be true even in the absence of reflexive forwarding; I only point out that if you start using ICN for these more complicated multi-way interaction use cases, those interactions have patterns detectable by surveillance. And I think I'm done — next slide; I think that's the final slide.
J
I was just curious what happens to those entries in the PIT if you just do a three-way handshake and leave them to time out — are they open for anybody to use, and what's going to happen to them? In the early slides you were showing a four-way handshake, but you said that it could just be a three-way handshake if we don't send the last message. I was curious what happens with those.
B
It just times out, like any other interest/data exchange where you don't respond to the interest. I don't actually recommend that for almost any of these use cases, but there may be cases where the cost of the extra bandwidth to send a final response is sufficiently high, compared with the cost of keeping the state until it times out, that it makes the trade-off in favor of the timeout.
H
NDN-DPDK is an NDN forwarder over native Ethernet, without using any overlays. Our goal is to achieve link-speed forwarding on commodity hardware, and so far we have achieved 106 gigabits per second between two nodes. The forwarder's design has a parallel architecture, so that we can use multiple CPU cores to process traffic, and the forwarder has efficient data structures in pre-allocated memory pools. By using DPDK — the Data Plane Development Kit — we can use user-space PCI drivers with hardware offloads.
H
This diagram shows the architecture of the forwarder. It's divided into three stages, and each stage runs some threads, each pinned to a CPU core. On the left side is the input stage: the input threads receive packets from the hardware Ethernet interface, decode them, and perform NDNLP reassembly if necessary; then they determine which forwarding thread should handle them by doing name lookups — which I will explain later — and dispatch the messages. In the forwarding stage, each of these threads implements the NDN protocol.
H
First I'll explain how the FIB works. For the FIB lookup, we use a two-stage longest prefix match algorithm. This algorithm is inspired by an ANCS 2013 paper named "Named Data Networking on a Router: Fast and DoS-Resistant Forwarding with Hash Tables." The FIB is designed with RCU — read-copy-update: if the management wants to update the FIB, like inserting or removing next hops, it needs to do it with RCU, read-copy-update. This is to achieve thread safety.
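A hash-table longest prefix match over name components can be sketched as below. This deliberately omits the two-stage probing optimization and the RCU protection just described, and simply probes from the longest prefix down to the shortest — the naive form of the same idea:

```python
def fib_lookup(fib, name_components):
    """Longest-prefix match over a hash table keyed by component tuples.

    `fib` maps tuples of name components to next-hop info. The real forwarder
    uses a two-stage probe and RCU-protected entries; this is the naive form.
    """
    for n in range(len(name_components), 0, -1):
        entry = fib.get(tuple(name_components[:n]))  # exact-match probe per prefix length
        if entry is not None:
            return entry
    return None
```

For example, with entries for `("a",)` and `("a", "b")`, a lookup of `["a", "b", "c"]` returns the `("a", "b")` entry, the longest matching prefix.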
H
Then, each FIB entry has a pointer to the forwarding strategy. The forwarding strategy is the function that determines how to forward the interest, and the strategy has the opportunity to observe when the data comes back and to store measurements, such as the round-trip time of each next hop. The measurement is stored on the FIB entry, and therefore the measurement granularity is the same as the FIB entry.
H
When a strategy needs to update a FIB measurement, it can do so without going through RCU, for efficiency reasons. But this also means each forwarding thread needs to have its own FIB partition, so that multiple forwarding threads or multiple strategies cannot update the FIB entry at the same time, which would be unsafe.
H
For the PIT, we have a PIT sharding algorithm: each forwarding thread has a private PIT instance. The PIT itself is a hash table, and it's implemented with non-thread-safe data structures, which are somewhat more efficient than the thread-safe counterparts. But because of PIT sharding, there are two requirements on interest dispatching. The first is that two interests with the same name must go to the same PIT, because this is required for interest aggregation.
H
The second requirement: if multiple interests have the same name prefix — not the same name, but sharing the prefix — they also should go to the same PIT, because this is needed for effective strategy decisions, since the forwarding strategy operates at namespace granularity, where the measurements are collected. So the solution is that we dispatch each interest by the hash of its first one or two name components. This is how it works.
H
In the input threads we have the name dispatch table, or NDT. The NDT maps from the hash of a name prefix to a forwarding-thread ID. The NDT is implemented as an array of atomic integers, and because the key is the hash, many name prefixes map to the same entry. In the input thread, when an interest comes in, the input thread computes the hash of its first two name components.
H
Of course the number of components is configurable. Then, using the hash value, it takes the low bits: suppose the NDT has 64K entries — it will take the low 16 bits of the hash as the NDT index, find that entry, and the answer inside that entry is a forwarding-thread ID. So the NDT entry that the name prefix corresponds to determines the forwarding thread: say it maps to forwarding thread 1 — then the interest goes to forwarding thread 1, and the interest goes into that thread's PIT. This is a kind of PIT sharding.
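The NDT dispatch just described can be sketched as follows: hash the first two components (configurable), take the low 16 bits as the index into a 64K array of forwarding-thread IDs. The particular hash function here is illustrative — the real forwarder computes its name hashes during packet decoding:

```python
import hashlib

NDT_SIZE = 64 * 1024   # 64K entries
ndt = [0] * NDT_SIZE   # each entry holds a forwarding-thread ID

def ndt_lookup(name_components, prefix_len=2):
    # hash the first `prefix_len` name components (SHA-256 here is illustrative)
    prefix = "/".join(name_components[:prefix_len])
    h = int.from_bytes(hashlib.sha256(prefix.encode()).digest()[:8], "big")
    index = h & (NDT_SIZE - 1)  # low 16 bits select the NDT entry
    return ndt[index]
```

Because the index depends only on the first two components, all interests sharing that prefix hash to the same entry, so they reach the same forwarding thread and hence the same PIT shard.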
H
But there is a corner case where name dispatching stops working, because in NDN there is prefix match. Suppose I have an interest named /A — just one component — which is also a prefix of the data name; that interest will go to the NDT entry determined by the hash of /A. But when the data comes back, the data name is /A/B/1, which would be dispatched by the hash of its first two components, /A/B — a different entry.
H
So the solution is that we introduce a hop-by-hop header field called the PIT token, and then we use the PIT tokens to associate interest and data. The PIT token is an opaque token that encodes the forwarding-thread ID and the PIT entry index. Every outgoing interest needs to carry this PIT token. It is a 64-bit field in the link-layer header; in NDN it is carried hop by hop in the link protocol, NDNLP.
H
Then, when the data or the nack comes back, the upstream node must put the same PIT token in the NDNLP header of the data or nack. Then, using the forwarding-thread-ID portion of the PIT token — suppose forwarding thread 1 forwarded the interest, so the PIT token carries forwarding-thread ID 1, and that's what goes upstream — when it comes back, the data also needs to carry the same...
The data, including this forwarding thread ID, comes back to the input thread, which can look at that portion of the token and dispatch it to forwarding thread 4, the one that handled the interest. So this part of the token routes the packet to the right forwarding thread; the rest of the token is used to accelerate the PIT lookup, but it is not required for correctness.
H
Then we go to the content store. The content store is a hash table, but in NDN we have to support prefix match. One of the NDN design principles is in-network name discovery: an interest should be able to use an incomplete name to retrieve data packets. But a hash table only supports exact match, it doesn't support prefix match, so our solution is introducing indirect entries. Here is an example.
H
Suppose a consumer sends the interest /A/B, but the data that comes back is called /A/B/1. In this case I am going to insert two entries into the CS. The first is /A/B/1, the data name entry; this is called a direct entry, and it carries the data packet. I also insert an indirect entry called /A/B, named after the interest.
H
The assumption is that a consumer who wants to use name discovery will keep using the same interest name for discovery; it is usually not going to send a shorter name in this case. So when the next interest is named either /A/B/1 or /A/B, it can find these two CS entries, because they are in the hash table. But in case the next interest name is /A, it is not found.
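The direct/indirect entry mechanism above can be sketched with an ordinary exact-match dictionary; the entry layout used here is a simplification of whatever the real forwarder stores.

```python
cs = {}  # exact-match hash table: full name -> CS entry

def cs_insert(interest_name, data_name, payload):
    """On data arrival: insert a direct entry under the data name and,
    if the interest used an incomplete name, an indirect entry under it."""
    cs[data_name] = {"kind": "direct", "data": (data_name, payload)}
    if interest_name != data_name:
        cs[interest_name] = {"kind": "indirect", "target": data_name}

def cs_lookup(interest_name):
    """Exact-match lookup; an indirect entry is followed to the stored data."""
    entry = cs.get(interest_name)
    if entry is None:
        return None
    if entry["kind"] == "indirect":
        entry = cs[entry["target"]]
    return entry["data"]

# Interest /A/B was satisfied by data /A/B/1 -> two CS entries, as in the talk.
cs_insert("/A/B", "/A/B/1", b"payload")
```

Both /A/B/1 and /A/B now hit in the exact-match table, while /A misses, exactly the behavior described above.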
H
In that case the interest has to be sent to the producer, but I hope this does not happen often. Next, hardware offloads in the input thread. We are using some of the hardware offloads supported by Ethernet adapters, but today most Ethernet adapters only support RSS (receive-side scaling) rules that match on Ethernet and IP header fields. So far I have been using this to support operating multiple faces on the same Ethernet adapter, distinguished by their MAC addresses.
H
Preliminary benchmarking shows that when I have more than eight forwarding threads, the input thread becomes a bottleneck. Also, if the server machine has two CPUs, i.e. two NUMA sockets, and the traffic needs to cross the NUMA boundary, it reduces the throughput by between 12 and 25 percent. The current RSS rules are not powerful enough to eliminate this bottleneck, so we need better RSS rules. I also found some NICs.
H
They support eBPF and FPGA, but the products are very limited and the development cost is quite high, especially for FPGA: basically each FPGA family requires separate development cost and development effort. What I wish the Ethernet adapter could support in its dispatching and filtering functionality is, first, I hope it can match at an offset into the data portion of the Ethernet frame.
H
If I could get that feature, I would be able to distinguish interest versus data, and in case the packet is a data, I could also use the hardware to read the first octets, the forwarding thread ID portion of the PIT token. So I could place the data straight into the forwarding thread on the correct NUMA socket.
H
So there would be somewhat fewer NUMA crossings when a packet is being forwarded. The implication is that this could need minor changes to the hop-by-hop header fields, but it does not affect the network layer at all. I also wish the NIC supported randomly dispatching to multiple queues, so that I could use more than one input thread to decode and process interests from the same NIC in parallel. But ultimately my hope is bigger.
H
I hope the Ethernet adapter can come to understand some of the NDN semantics, but that is ten years from now. Moving on: support for implicit digest. The implicit digest is an NDN protocol feature. It allows the final component of an interest name to be the SHA-256 digest of the whole data packet, and the forwarder then needs to do the digest computation on the data packet in order to determine whether an interest with an implicit digest can be satisfied. But the digest computation is slower than all the regular forwarding workloads.
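The satisfaction check being described can be sketched as below. Treating names as component lists and the digest as a hex string are simplifications; real NDN computes SHA-256 over the exact TLV encoding of the Data packet.

```python
import hashlib

def implicit_digest(data_packet_bytes):
    """SHA-256 over the whole encoded data packet, as a hex string."""
    return hashlib.sha256(data_packet_bytes).hexdigest()

def satisfies(interest_components, data_name_components, data_packet_bytes):
    """An interest whose final component is an implicit digest matches only
    if the rest of the name equals the data name AND the digest component
    equals the digest of the whole data packet."""
    *name_part, digest = interest_components
    return (name_part == data_name_components
            and digest == implicit_digest(data_packet_bytes))
```

The expensive part is `implicit_digest` itself, which is why the talk offloads it to a separate crypto helper thread rather than running it inline in a forwarding thread.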
H
So I cannot really do that in a forwarding thread. The solution is introducing a crypto helper thread. When the forwarding thread receives an interest or a data and determines that the digest computation is necessary, it passes that packet to the crypto helper thread and asks the crypto helper thread to compute the digest; then the forwarding thread can continue with the next packet. The crypto helper thread will invoke a DPDK crypto device.
H
Next, support for the forwarding hint. The forwarding hint is a routing scalability solution. In the example up there, the interest name is not routable, but the forwarding hint can carry one or more routable names. The forwarding thread will look up the FIB with each of the delegation names in the forwarding hint, and the first delegation name found in the FIB is called the chosen delegation, or chosen forwarding hint.
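Choosing the delegation can be sketched as a first-match scan over the hint. Using a flat set for the FIB (instead of longest-prefix match) and these delegation names are illustrative assumptions.

```python
fib = {"/net/example", "/isp/blue"}  # hypothetical routable prefixes in this node's FIB

def choose_delegation(forwarding_hint):
    """Return the first delegation name that has a FIB match; interests
    whose hints yield different chosen delegations are kept apart in the
    PIT and the content store, as described below."""
    for name in forwarding_hint:
        if name in fib:
            return name
    return None  # no delegation is routable from here
```

Scanning in hint order means the consumer's preferred delegation wins whenever this node can route to it.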
H
Then we can forward the interest according to that FIB entry. Also, interests with different chosen delegations cannot be aggregated in the PIT; this is to avoid a scenario similar to cache poisoning. The data will be matched to the PIT entry using the PIT token, so we know which chosen delegation brought back the data, because of the PIT token. And also to prevent cache poisoning, the content store is logically isolated per chosen forwarding hint.
H
So if two interests have different chosen delegations, they cannot aggregate and they cannot implicitly affect each other at all. Finally, I have some ideas about reflexive forwarding. This does not mean I am committed to doing it; it is just that if I were to implement reflexive forwarding in my forwarder architecture, here is what I would do, starting with how the forwarder updates itself.
H
As drafted, I would need to update the FIB of one forwarding thread, but since the FIB is sharded and I would have to update it from a forwarding thread, it would be too slow. So I think I should just skip the FIB and use only the PIT to determine the forwarding. Then I want the reflexive interest to contain the PIT token of the original interest, but not in the name.
H
It's in the forwarding hint, and this does mean that the forwarding hint of the reflexive interest will change hop by hop. But the benefit is that the reflexive interests and also the data packets can have normal names: they don't need a special name component in front, they don't depend on the consumer being able to generate good random numbers, and they don't require the original consumer to re-encapsulate the data.
H
In the forwarder, the forwarder will be able to identify that an interest is a reflexive interest because its forwarding hint starts with a reflexive name component. The input thread will then dispatch by this forwarding hint instead of computing the hash of the interest name, to reach the right forwarding thread.
H
The forwarding thread will find the PIT entry of the original interest using the PIT token, and then it also needs to verify that the current reflexive interest matches the reflexive name in the original interest, and that the original consumer didn't prohibit further reflexive interests from being forwarded. So here, the blue part is the original interest/data exchange and the yellow one is the reflexive exchange. We can see the blue PIT token on the upstream side; the upstream will put the same PIT token in the forwarding hint. Okay, the last page is the references. Okay, thank you.
D
Hi, hello everyone. I am Anil Jangam, and along with Prakash Suthar and Milan Stolic we are presenting the QoS treatments draft for information-centric networks. We have posted the latest update, which is -02, and here is a summary of the changes. What we added is, first, a discussion of the network resources to be controlled; mainly those are link capacity, the content store, and forwarder memory, that is, the PIT, and compute.
D
In addition to that, we have introduced the QoS marker into a hop-by-hop header. Our initial -01 draft talked about the QoS marker as a name-based encoding, and Dave gave us some constructive feedback about some of the potential challenges involved with that approach. So we now have another option using a hop-by-hop header, and depending on the feedback from the community and some of the experiments we are doing, we will decide which way is better and what its advantages are.
D
Then
we
have
the
improved
with
scaling
design,
in
which
case
the
the
marker
state
testing
for
measure
is
now
stored
as
far
to
be
interface
rather
than
as
an
explicit
country.
So
this
will
help
to
reduce
some
smokers.
Whitlow
that
is
going
to
create
and
I
will
explain
couple
of
use
cases
where
what
I
mean
by
that
and
talk
about
some
introduction
to
the
he
was
a
remarketing
scheme.
D
So this table summarizes the QoS treatments applied to the network resources; as referenced on the slide, this has some overlap with prior work cited there. Here we define the resources as link capacity, content store capacity, forwarder memory capacity (essentially the PIT), and compute capacity; a plus sign here indicates that a treatment increases the use of that resource.
D
We
are
working
on
some
of
the
experiments
which
will
we
have
where
we
learn
our
peer
data
or
under
trending
and
in
the
case
the
curious
modeling
or
the
treatment.
Modeling
is
the
joy
of
things
where
the
ability,
or
you
know,
handling
number
of
traffic
classes.
Given
the
total
amount
of
memory
we
have,
let's
say
on
the
pivot
or
in
the
cache
and
the
the
processing
capacity,
and
the
second
is
the
trade-off
between
the
ability
to
express
the
type
of
us
treatment,
given
the
protocol,
encoding
ability
and
the
algorithmic
implementation.
D
So
now
what
we
have
seen
in
traditional
IP
world
is,
you
know
we
have
only
limited
TCP
codes,
whereas
in
Indian
we
can
have
more
space
to
encode
the
QoS
treatment.
Now,
whether
we
need
just
you
know,
let's
say
six
digits
or
more,
you
know,
let's
say
one
byte
or
to
the
height.
It
depends
on
like
how
how
expressive
the
QSB
can
be,
and
then
that
is
where
the
the
TLB
based
approach
is
more
providing
more
and
more
prodding
more
opportunities.
D
This
will
be
proposed
design
for
the
TLB
encoding
for
the
home
they're
based
QA
smart,
and
we
introduced
it
as
we
would
like
to
introduce
as
a
mandatory
hell
by
operator
just
to
make
the
semantics
of
this
header
that
it
is.
It
has
to
be
forwarded
by
every
router
to
the
next
home
so
that
every
you
know
the
downstream
router
has
the
opportunity
to
see
what
the
QoS
treatment
is
intended
by
IP
or
original
consumer,
and
then
it
acts
on
it
and
right
now
we
have.
D
We
are
proposing
it
as
a
one-bite
field,
which
is
like
8-bit
qsr
field
and
depending
on
the
type
of
treatments
and
then
combinations.
We
will
see
whether
one
one
bite
is.
You
know,
for
you
know
how
we
can
break
it
or
maybe
make
a
use
of
to
buy
it
since
one
so
that
we
don't
have
final
clarity
on
that
as
as
yet.
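A minimal sketch of a one-byte QoS marker carried as a type-length-value triple follows; the type number is purely hypothetical, since the draft has not fixed the encoding details discussed here.

```python
QOS_MARKER_TLV_TYPE = 0xFD  # hypothetical type number, not assigned by the draft

def encode_qos_marker(marker):
    """Encode the one-byte QoS marker as a TLV: [type, length=1, value]."""
    assert 0 <= marker <= 0xFF
    return bytes([QOS_MARKER_TLV_TYPE, 1, marker])

def decode_qos_marker(buf):
    """Parse the TLV back into the marker value."""
    t, length, value = buf[0], buf[1], buf[2]
    assert t == QOS_MARKER_TLV_TYPE and length == 1
    return value
```

A TLV leaves the door open to widening the value to two bytes later, which is the flexibility the speaker contrasts with fixed DSCP bits.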
D
Moving to the next slide: the QoS-aware forwarding design, or the PIT design. As we said earlier, in the original draft we kept an explicit PIT entry for every QoS marker, in effect duplicating entries for different QoS markers. Now we have changed it to interface-based state keeping, so the QoS marker will be saved as part of the per-interface record in the PIT entry's data structure, rather than as an explicit PIT entry.
D
The interface data structure can be enhanced to save the QoS marker. There are two use cases which we already documented in our previous submission, but just for the sake of completeness with the newer design of this state in the PIT, I am reiterating those two use cases here. One is the case where we receive a duplicate interest with a higher QoS marker on the same interface, and I'll explain how this is possible; the second is a duplicate interest received on a different interface.
D
When I say duplicate interest, I mean an interest with the same content name; the second interest received carries a higher-priority QoS marker, possibly on a different interface. Now look at these PIT entries: interest 1 and interest 2 are both received on face 1, but interest 1 carries QoS marker 1 and interest 2 carries QoS marker 2, and QoS marker 2 is the higher-priority marker compared to QoS marker 1.
D
So in this case we will be forwarding the second interest as well, and this is where the PIT aggregation relaxation takes place. This is a departure from the well-known aggregation mechanism of the PIT. We don't want to call it a limitation, but it is the price we may have to pay for the implementation of QoS, because this possibility now exists; and it applies only from the local router's perspective.
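The relaxed aggregation rule described above might look roughly like this. The dictionary PIT model and the convention that a numerically larger marker means higher priority are assumptions made for illustration.

```python
def on_interest(pit_entry, in_face, marker):
    """Record the marker for the downstream face and decide whether the
    interest goes upstream. The first interest always goes out; a
    duplicate (same name) goes out again only when it carries a strictly
    higher marker than anything recorded so far, regardless of face."""
    best = max(pit_entry.values(), default=None)
    # Keep the highest marker seen per downstream face.
    pit_entry[in_face] = max(marker, pit_entry.get(in_face, marker))
    return best is None or marker > best
```

Both use cases from the slide fall out of the same rule: a higher marker forces re-forwarding whether it arrives on the same face or a different one, while an equal or lower marker is aggregated as usual.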
D
As far as the QoS treatment of data is concerned, the data delivery will be handled based on the QoS marker state that was saved into the PIT. So the QoS treatment that was carried to the upstream routers in the interest need not come back in the data packet; that is one change we are making here. The PIT entry records the QoS marker depending on which interface the interest was received on.
D
On the way back there are two cases. In case one, the data packet is forwarded to the downstream routers using the highest QoS marking recorded across the PIT entry's interfaces. In case two, the data packet is forwarded on each downstream interface with the actual QoS marking recorded at that interface.
D
If a router decides to remark the QoS for whatever reason, the upstream router then has no knowledge of the original QoS treatment intended by the consumer; it is always working from the previous router's decision, which is not ideal. So in the remarking scheme, the original QoS marker is also preserved.
D
That is, the intermediate router can remark it and still forward both QoS markers to the next router, so that the next router can decide whether to act on the remarked value or on the original intention. That is the final change we are talking about here. We will discuss more about this remarking scheme, as well as the TLV encoding, in our next submission.
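A sketch of that remarking rule: the first remarking stashes the consumer's original marker alongside the new one, so later routers still see the original intent. Representing the header as a plain dict is an assumption for illustration.

```python
def remark(header, new_marker):
    """Remark the QoS marker while preserving the original one, so the
    next-hop router can choose between the remarked value and the
    consumer's intended value."""
    out = dict(header)  # leave the caller's copy untouched
    out.setdefault("original_marker", header["marker"])  # only set once
    out["marker"] = new_marker
    return out
```

Because `setdefault` only writes the original marker the first time, repeated remarking along the path never loses the consumer's intention.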
D
So we are looking for more feedback and more comments from the group. So far we have received a good amount of very constructive feedback from Dave, which we have incorporated into this work, and Luca has also agreed to review this draft, and we will take it from there.
E
Yeah, I don't see any questions. It seems that the draft has been getting some feedback, and there has been some discussion on the list, so it would be great if we could continue this. In general, the whole QoS topic is of course really interesting, and I think we have seen that there are different approaches. Let's continue the technical work in the group. Thanks again.
B
E
Well, as you probably noticed, it's a bit unpredictable what's going to happen, so we can imagine there is at least one other online meeting we are going to hold. But really, let's not wait for a synchronous meeting; we can spin off new work on the mailing list and in direct communication as well. So let's just keep doing that. Thanks for staying with us, it did take a bit longer. Please stay safe, everybody, and hope to hear from you again soon. Bye.