From YouTube: IETF101-ROLL-20180322-0930
Description
ROLL meeting session at IETF101
2018/03/22 0930
https://datatracker.ietf.org/meeting/101/proceedings/
A
Okay, we are going to have a presentation shared a little with 6TiSCH: Michael is going to present the enrollment priority work, which is using 6TiSCH. Okay. We sent an email about the issues that we have in the current documents, so if you have some comments on that, please post to the mailing list.
B
I wanted to make a few comments. It's the first ROLL meeting in our chairmanship which takes so long, so it's a bit of an experiment and I hope it will turn out correctly. But we have lots of new subjects, and I think they're worth the discussion, especially the discussion about how to use BIER in ROLL and how we're going to forward that. So we all think that needs some time.
E
The problem with using the standard objective functions, as described by the draft by Qasem as well, relates to networks becoming unbalanced when you use them: some nodes will get overloaded when you use those objective functions. That would typically lead to lower network and node lifetime, and as a result you will have higher packet losses, probably due to queue drops, and higher packet delay. Well, this is very...
E
In this case, a balanced version would be like this, where it's half and half. Now, the objective function proposed by Qasem will resolve this problem, because it uses the child count to balance the network. However, if the nodes don't all transfer data at the same rate, the result will not be balanced. So in this case, this node doesn't actually send data at the same rate as the other ones. Note that, to balance it, you will actually need to make it like this, so the child node component won't be the same.
E
So [we propose a] new metric called packet transmission rate, and in our case we use the packet transmission rate for each node, so it's local information. But Derek — I didn't recognize [the full name] — proposed that we could also use an accumulative version of this, where the packet transmission rate would be accumulated: the cumulative value for the whole subtree.
E
In any case, our objective function basically selects as the preferred parent the parent which has the lowest PTR. So this is an example of a DIO which contains the DAG Metric Container, and within the DAG Metric Container you would have something like this: a new routing metric container type, which needs to be assigned, and just a two-octet packet transmission rate. So this is the current state.
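[As a rough illustration of the rule just described — pick the candidate advertising the lowest PTR — here is a minimal sketch; the tuple layout and the rank tie-break are assumptions, not from the draft:]

```python
# Sketch of preferred-parent selection by lowest advertised PTR.
# Candidates are (node_id, ptr, rank) tuples gathered from received DIOs;
# breaking PTR ties on lower rank is an assumption for illustration.

def select_preferred_parent(candidates):
    """Return the candidate with the lowest packet transmission rate."""
    if not candidates:
        return None
    return min(candidates, key=lambda c: (c[1], c[2]))

candidates = [("A", 120, 256), ("B", 40, 512), ("C", 40, 384)]
print(select_preferred_parent(candidates))  # ('C', 40, 384)
```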
E
We have some very, very preliminary results. We have a network here; we used Contiki to simulate it. You can't really see it, maybe, in this diagram, but basically the network is heavily lopsided: these nodes on this part send data much faster than the other ones. So if you try to balance the whole network with plain RPL, it won't work well; but even with child count, this node will typically have more children, although it should be more [balanced].
E
The load-balancing network already gives better results; ours gives a bit better, and MRHOF tends to be slightly worse, due to the load-balancing issue. We also count parent changes, so you will see there's a lot of variability here — that's a side effect of the very preliminary results — but in this case as well, our objective function is a bit more stable and leads to fewer parent changes, and we also count the DIOs as well.
E
So in this case we [did] one, and we have some issues already. After discussing the whole idea, we [found] that we need to discuss how general we want our solution to be. A lot of things at the moment are predefined — so [the data unit] has been set to a constant. One idea would be to either keep it as it is, or to also keep a few bits in the metric to send the data unit as well — so packets, or octets, or whatever. Another issue is the time unit, and it might also be a good idea to send it.
E
We could normalize the PTR to a given capacity, or you could send the max capacity as another field as well — obviously this has some network overhead. We also assume that all nodes consume energy at the same rate. You already have the energy metric container, which counts the energy remaining until the end of the battery, but I haven't seen something that actually counts the relative consumption. So we can deduce that, to some extent, from how much energy is remaining, but it might be more accurate if we used [that] in replacement.
E
Finally, at the moment the packet transmission rate, again, is derived from what a node is forwarding in terms of traffic, and it might be a good idea — because at the end, what you care about is the whole path to the root — so maybe it would be more interesting to do a cumulative version of this metric.
E
So at this point — we had this discussion yesterday — some of these metrics might be interesting to attach to the EB [Enhanced Beacon] information element, so that nodes are better able to do the whole join process when [joining] the network. This can obviously be rolled into one of the priority values we discussed yesterday, instead of being a separate thing.
H
What I was actually interested in was node 7. If you want to also try to balance the children, then it seems that you very much want to make sure that 7 connects to node 4, such that node 5 has the capacity to accommodate [the other] node. And I'm wondering: how do you kick 7 over that way? Because it's not really sending traffic, but it is occupying — yeah. Oh no...
H
But what I understand is — I can see how a node needs to make intelligent choices about that. I'm trying to understand how, in your algorithm, node 7, which is not a high sender, is discouraged from connecting to 5, such that node 5 has the capacity for the child — the child [count], not the bandwidth.
H
It might be good to do some experiments where you have enough nodes such that you actually overload the child capacity, and then see if you are still able to migrate or partition your network — given that you have children that, like 12, have no choice; they have to be there. But 7 is critical: it has a choice.
E
Yes, right; these nodes are basically here just to add some [load].
E
We're talking about the [burstiness of the] traffic, right? Yeah, yeah, you might have that. That's a problem with all the objective functions: whenever you have some abrupt changes, you need to be able to adapt, and depending on how often you get the DIOs, the update might take some time and you might have some problems. Yeah, for sure.
E
This draft we have presented again in Singapore. We have some changes; I'll probably go over them a bit quickly. So we are creating a new TLV in the NSA — the Node State and Attribute object — within the Metric Container. This is used basically for enabling PRE: packet replication and elimination. Since the last version we did some editorial changes: we improved some of the diagrams and some wording [on the] SR model; we specified how some of the [Metric Container] fields should be used; and we also created some Wireshark dissectors for this field.
E
So the idea here is that we are trying, as much as possible, to achieve some determinism, to get reliable communication and low-jitter performance as part of that. The idea is to use packet replication and elimination: you replicate the packets to multiple parents; you make sure that whenever they arrive at a common node they get eliminated, so you don't get a storm of packets; and you also use promiscuous overhearing to increase the number of packets received.
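[A toy illustration of the elimination step — my own sketch, not the draft's mechanism: a common ancestor can drop duplicate replicas by remembering recently seen (source, sequence) pairs:]

```python
# Toy sketch of duplicate elimination at a common ancestor:
# replicas of the same packet (same source and sequence number)
# are forwarded once and dropped afterwards.

class Eliminator:
    def __init__(self):
        self.seen = set()  # (source, seqno) pairs already forwarded

    def accept(self, source, seqno):
        """True if this copy should be forwarded, False if it is a replica."""
        key = (source, seqno)
        if key in self.seen:
            return False
        self.seen.add(key)
        return True

e = Eliminator()
print(e.accept("S", 1))  # True  (first copy, forward it)
print(e.accept("S", 1))  # False (replica, eliminate)
print(e.accept("S", 2))  # True
```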
E
So we need to extend the DAO messages with some information to allow child nodes to select an alternative parent as well as possible for our purposes. For that parent selection — our draft enables it to function at all — specifically, the idea is to allow selecting an alternative parent which has a common ancestor with the [preferred] parent. So, in this diagram, the idea is to be able to select B, which has a common ancestor with A; so B needs to be selected for that purpose.
E
Also, a very quick example afterwards: we have again the DAO packet with the [DAG] Metric Container, and in it we have the NSA object, and within the NSA object we'll have a request for a new TLV — an optional TLV — which carries the information used for PRE. This information is basically a number of IPv6 addresses: those of the parents of a node. Now...
E
If you have this network — and this is the preferred parent of S, and [this] is the preferred parent of A; this is the parent set of S, and this is the parent set of A, and you have some intersection here — you want to select the, sorry, the B node as the alternative parent for S, since there is an intersection here. So, since B is in the parent set of both [S and A], you select B as an alternative parent.
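[A minimal sketch of that selection rule — the data model is assumed, not the draft's encoding: pick an alternative parent from your own parent set whose advertised parent set intersects the preferred parent's parent set:]

```python
# Sketch: choose an alternative parent whose own parent set shares a
# common ancestor with the preferred parent's parent set.
# parent_sets maps node -> the set of that node's candidate parents,
# as would be learned from the new NSA TLV carrying parent addresses.

def pick_alternative(my_parents, preferred, parent_sets):
    for cand in sorted(my_parents - {preferred}):
        # A common ancestor means replicas can be eliminated there.
        if parent_sets.get(cand, set()) & parent_sets.get(preferred, set()):
            return cand
    return None

parent_sets = {"A": {"C", "D"}, "B": {"D", "E"}}
print(pick_alternative({"A", "B"}, "A", parent_sets))  # B (shares ancestor D with A)
```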
E
Now, one issue we have is that, unfortunately, sending all that data is pretty heavyweight — like 16 bytes per IPv6 address. One thing we have done is to remove half of that, to somewhat compress it, but again, that's not ideal. We also have some ideas: we are thinking about using the same information not for keeping the route of the replicas close to the original path, but for intentionally moving away from the original path, to get some extra diversity, [rather than] just [following] the preferred parent.
H
Michael Richardson. To compress your v6 addresses, code that's probably already present would be the 6LoRH header, which basically has a list of addresses and compresses them since they're similar. It does a very good job of compressing them, into maybe one or two bytes each. That would be ideal, and the code is probably already present on most systems.
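[A rough illustration of why a 6LoRH-style encoding gets similar addresses down to a byte or two — my own simplification of the idea in RFC 8138, not the actual wire format: when all addresses share a long common prefix, only the differing tails need to be carried:]

```python
# Simplified illustration of prefix elision for similar IPv6 addresses
# (the real 6LoRH encoding in RFC 8138 is more elaborate).
import ipaddress

def compress(addrs):
    raw = [ipaddress.IPv6Address(a).packed for a in addrs]
    # Length of the longest common leading byte run across all addresses.
    common = 0
    while common < 16 and len({b[common] for b in raw}) == 1:
        common += 1
    # Transmit the shared prefix once, then only the tails.
    return raw[0][:common], [b[common:] for b in raw]

prefix, tails = compress(["2001:db8::1", "2001:db8::2", "2001:db8::3"])
print(len(prefix), [len(t) for t in tails])  # 15 [1, 1, 1]
```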
E
So that's obviously an issue, depending on what you want to achieve. I don't know if you can necessarily do both at the same time, but what we use at the moment is a normal metric, like ETX or whatever, and we use our constraint just to constrain the alternative parent. So it can work with other metrics pretty okay; that's not a problem in itself.
J
The objective functions that we usually present at the IETF are very simple — like, use one metric — but we expose a lot more metrics, and now we put in more and more stuff, and a real-world objective function may be more intelligent than that. An objective function can be a piece of logic; it's meant to be a piece of logic which ties a number of metrics together, and the way it's tied together may be: first you look at this, because that's my most important concern, and the other things may be tiebreakers, something like that.
J
You really want to build two non-congruent paths, with the capability to do packet replication and elimination in the middle of your network, because [these] networks are so lossy and you really want your data to go through in a definite time. So that's your priority. Now, if you can do that, and if you see multiple parents which would fit, then all of a sudden your whole life may be taking more ideas from some of this component work that we have — the OFs based on ETX, or use the load-
J
balancing that we just saw, I mean. That's how it's all bound together: it's the fact that your objective function will not be one of those, you know, simple OFs that the IETF has produced, but one which really does what you want. So if you have an industrial consortium which really cares about deterministic properties — or closer-to-deterministic properties — in this network, you will want this, and you will try to do some traffic balancing if that's possible.
G
Hello, Rahul Jadhav from Huawei Technologies. I am going to present RPL observations. So basically, these are just observations — no solutions here. These are some of the issues that we found during our solution analysis, design, implementation, and deployment — a pilot, actually. So we ended up having some sort of solutions to some of these problems, but we definitely believe that those solutions are not optimal — definitely not best in all cases — and we wanted to bring these problems to the working group, to check whether they make sense and whether we are not missing a major point here.
G
So that is what we wanted to check with you. Our deployment — our pilot — was primarily towards smart-meter networks, where we have thousand-node cardinality with sixteen hops, which is quite big, and we have storing as well as non-storing mode of operation involved. Having said that, most of the problems listed here are related to storing mode, but we definitely believe that some of these problems can be solved in a better way. Okay, so, first problem.
G
This is one of the major problems that we have faced: how to handle the DTSN increment. This is non-trivial, especially in storing mode; actually, in non-storing mode of operation this is pretty easy to take care of. In storing mode, how do you increment the DTSN? The decisions that you make there can impact the downstream route availability.
G
So this is the first implementer's and deployer's dilemma that we had: in which case should the DTSN be incremented? The DTSN is a sequence number which is part of the DIO message, and which essentially tells the child nodes whether they should send the DAO message or not. Now, there are two problems here. There is currently no way for the target node to know whether the DAO actually reached
G
the border router — that the end-to-end path has been established. So the current mechanism has to have enough DAO redundancy, so as to make sure that the DAO actually reaches the border router; and after that it should ideally start the application traffic. This is very important, because if you end up starting the application traffic before the routes are established end to end, then you end up clogging the network; you end up queuing.
G
The ACKs are not going to come back, and you don't ideally want to start your application traffic at this time. So, essentially: should the DTSN be incremented with every DIO trickle timer [reset]? Now, I know for sure that at least the old implementation of Contiki incremented the DTSN on every trickle timer timeout, which is bad. But then there is no option, I feel, because the moment the number of hops increases [the problem gets worse], in case of storing mode of operation.
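[For context, a simplified sketch of the Trickle timer from RFC 6206 — my own illustration; incrementing the DTSN on every such timeout is the behavior being criticized here:]

```python
# Simplified Trickle timer (RFC 6206): the interval doubles up to Imax
# and resets to Imin on an inconsistency. Incrementing the DTSN on every
# expiry (as the old Contiki reportedly did) multiplies DAO traffic.
import random

class Trickle:
    def __init__(self, imin=1.0, imax_doublings=8):
        self.imin, self.imax = imin, imin * 2 ** imax_doublings
        self.i = imin

    def next_fire(self):
        """Return a firing point in [I/2, I), then double the interval."""
        t = random.uniform(self.i / 2, self.i)
        self.i = min(self.i * 2, self.imax)
        return t

    def inconsistency(self):
        self.i = self.imin  # reset on an inconsistent DIO

tr = Trickle()
print(tr.next_fire() >= tr.imin / 2)  # True
```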
G
Now, if you don't increment the DTSN on the trickle timer, then your DAO redundancy is too low: you have a high probability of not reaching the border router, and of course, with the increase in the number of hops, the probability of success drops sharply. So this is something that we had seen happening in our networks, and what we had seen is, actually, [in] the network convergence time, 90% of the nodes get joined, but the remaining 10% of the nodes [take very long].
G
So that is what we have seen — and so, is there any better way to operate? Storing mode is the mode of operation considered here; in non-storing mode of operation, again, this is not a problem. Now, this again is one important point: the DAO-ACK. This has some relevance to the previous point discussed, the DTSN increment. If you end up handling the DAO-ACK properly, then it will solve a good number of issues — that is my opinion.
G
There had been discussion on the mailing list in 2015 regarding this, and there are implementations — for example, with the current RPL spec there are multiple interpretations possible: whether the ACK should be sent hop by hop, or whether it should be sent end to end, and there is no way to [settle] this simply. I mean, currently RIOT implements hop-by-hop acknowledgment; the older version of Contiki implemented hop-by-hop acknowledgment, but recently Contiki found that they need an end-to-end acknowledgment mechanism, because — the primary reason is the target node: without this mechanism,
G
the target node won't be aware that [its DAO] has reached the border router. Okay, so how should one then implement or interpret this particular scheme in the RPL specification? There are pros and cons for each of these mechanisms. Hop-by-hop acknowledgment is actually pretty easy to implement; there is no state involved. In case of end-to-end acknowledgment, either there is some sort of state involved in the routing table, or there is some sort of overhead involved in the network control messaging. So it has its own pros and cons.
G
We eventually ended up implementing this end-to-end acknowledgment, but in a different way; without that, we couldn't get our convergence time into any good shape. The other problem here is: how do we handle aggregated targets in case of DAO-ACK? The ACK is for a DAO message, and not for individual targets, and a DAO message can actually contain multiple targets. So how do you acknowledge a particular DAO which has multiple targets, when a subset of the targets fails? This is a problem.
G
So, another problem: RPL is not clear on how to handle aggregated targets. It certainly allows it — there is definite wording in the specification which allows aggregated targets — but it does not have failure handling for it. This has been discussed on the mailing list in 2015 as well, and I feel this is important to be handled.
G
If you look at the two most popular implementations, RIOT and Contiki, each does it in a different way; today it is impossible to get any sort of interoperability between these two implementations at multiple hops. What we have seen is: RIOT sends aggregated targets, Contiki doesn't handle it, and the network never gets formed. The way I have experimented with this is: I have a border router and a few of the nodes [connected] here, [and a few more beyond], but this will show a problem only at multiple hops.
G
If you have a smaller network — you are just doing an interop at a very small scale, on a table, with all the nodes speaking to the border router directly — you are not going to have any problems. The moment you try to scale it, try to achieve performance at a bigger scale, that's when all these problems start creeping in. So the DAO-ACK is important — I've already talked about it — because it is important to know when the end-to-end path is established.
G
It is important for the application to know when it should start its traffic. In case of hop-by-hop, the DAO can fail [at any hop]. Hop-by-hop acknowledgment is actually not very useful, in my opinion, because you have link-layer acknowledgment as well; so in most cases, most of the implementations disable it by default. I don't know if anyone uses the hop-by-hop acknowledgment mechanism to achieve anything specific right now.
G
So there is one more alternate behavior that we have implemented. Basically, what we have done is: the DAO goes [up hop by hop] like this, and the border router — the root — eventually sends an acknowledgment directly [to the target], using a global [address], because the downstream path is already established. This way you have the least overhead in terms of control traffic, and you don't have to keep any additional routing state on the 6LRs; but the problem here is that you can't handle aggregated ACKs.
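[A toy sketch of the idea that application traffic should wait for the end-to-end confirmation — my own illustration, not the implemented mechanism; the retry count and callback shape are assumptions:]

```python
# Toy sketch (not from the draft): a node defers application traffic
# until an end-to-end acknowledgment from the root confirms that the
# DAO reached the border router and the downward path exists.

class DaoState:
    def __init__(self, retries=3):
        self.retries = retries
        self.acked = False

    def register(self, send_dao, wait_for_ack):
        """send_dao() transmits a DAO toward the root; wait_for_ack()
        returns True once the root's direct ACK arrives, False on timeout."""
        for _ in range(self.retries):
            send_dao()
            if wait_for_ack():
                self.acked = True
                return True   # safe to start application traffic
        return False          # path unconfirmed; back off and retry later

s = DaoState()
print(s.register(lambda: None, lambda: True))  # True
```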
G
A lot of discussion has happened already [on] the next point I'm going to discuss; I'm not sure — I would really like to have some feedback here — and this is something that we have faced. In case of storing mode of operation, RPL has some state information to be maintained across node reboots, and in case of IoT devices, flash endurance is a big problem.
G
If you end up having your network protocol flash something to persistent storage every now and then, it's not acceptable — at least, this is what our solutions requirements team told us. It's not acceptable, so you have to handle it in some other way. I'm not sure if anyone has handled this particular problem; current open-source implementations don't handle it. They expect that there is some persistent storage, but they don't handle it by themselves, and we feel, again, this is important — again, this is more impactful especially in case of storing mode of operation.
H
Of course — yes, but of course you still have [a way] to tell the system that you're dying because your flash is already [worn], at least, which is better than the other way around, right? Yeah, but I —
H
The reason I'm asking is because there are other things — like ASNs in 6TiSCH, and other stuff, and network keys — that you do want to write, but you don't need to write even daily to the [flash]; but there are some other things that the network stack needs to keep track of, I think. So that's why I wanted to know if there was some threshold that is a pain for them — and then, because your other slides were about how often we increment the DTSN... well.
G
Sorry — so you're talking about write amplification, where you change the location of the flash [writes] so that it gets leveled across multiple [cells]. So I have put some numbers in the draft; please check whether those make sense. Okay, I really want those to be reviewed. So I've put some specific information, considering the write amplification, considering the node endurance level, considering MLC and TLC flashes, and provided some data there.
G
Most of the IoT devices — the CC2538, the CC2650, for example — implement some cheaper flashes, and it's a big problem with endurance. 6LRs in particular are more impacted [by] wear-out. [The root] we mostly don't care about, because, yeah, we assume that it has some sort of other mechanism to handle it, okay.
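[A back-of-the-envelope illustration of why this matters — the numbers below are made up for illustration, not the figures in the draft: flash lifetime under periodic routing-state writes is roughly endurance divided by the effective write rate, where write amplification inflates each logical write:]

```python
# Hypothetical figures for illustration only -- not from the draft.
def flash_lifetime_years(endurance_cycles, writes_per_day,
                         write_amplification, wear_leveling_sectors):
    """Rough lifetime estimate: wear leveling spreads writes over many
    sectors; write amplification inflates each logical write."""
    effective_writes_per_sector_per_day = (
        writes_per_day * write_amplification / wear_leveling_sectors)
    return endurance_cycles / effective_writes_per_sector_per_day / 365

# e.g. a 10k-cycle flash, 100 routing-state writes/day, 2x write
# amplification, wear leveling over 8 sectors:
print(round(flash_lifetime_years(10_000, 100, 2.0, 8), 1))  # 1.1
```

Under these assumed numbers the part wears out in about a year, which is why flashing routing state "every now and then" is unacceptable for a smart-meter deployment.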
G
So this is again a hot topic; it's already being handled, I think — other work is in progress — and this is definitely impacting the overall solution implementation for us: basically, handling resource unavailability in terms of the neighbor cache table and the routing table. How do you handle it when the neighbor cache goes full? There is no signaling currently, I
G
think. Part of the work is being done in the enrollment draft, but there are certain scenarios which, I still feel, the draft might not be able to handle. For example, in this case: let's say there are four routing table entries on N1 and N2; eventually there is only a single 6LR here — or rather, three routing table entries. So how do you handle it? How do you signal it?
G
So N3 has only one preferred parent, so either all these four entries will go through N1 or N2, and both of them are full, I think. So how is this going to be handled? There are some scenarios which are really tricky, especially in the context of handling mobility, and this has been a major, major problem for us. This directly impacts the network convergence time.
G
This directly impacts the packet delivery rate for us. We have an implementation which tried to solve this issue; we have not ended up anywhere as of now, but we have a lot of observations, and I feel, if other people have some alternate solutions, we'd definitely like to understand — and we'd definitely implement it if someone else has a solution towards this. All right, that's all. Thank you.
H
Michael Richardson again. This is wonderful work; I really would like to adopt this. There's clearly a bunch of updates [here]. Your points about the DAO-ACK, hop-by-hop versus end-to-end — I implemented end-to-end, and from just thinking about the problem I said, well, it has to be end-to-end or there's no point. But it hadn't occurred to me that the spec was ambiguous about that; I think I just assumed
H
that was the case. So clearly there are some bugs in the spec, and I think you've identified those. My suggestion is — I haven't looked at your document, I'm sorry — but I think you should split it into the ones that are clearly "there is a problem here". It doesn't matter whether you recommend an end-to-end or a hop-by-hop ACK.
H
In fact, I think we should adopt the document — with, you know, A or B — as a working group document, and then we should make a decision as a working group as to which one we're actually going to [commit] to, for a bunch of those things. The DTSN increment — I think that's a bug in the spec. I don't know what the answer is, but it's clearly a bug in the spec. Okay, so I think those are updates
H
to 6550, and I think that would be a really good document to do, and I think it should be, in some sense, non-controversial. And then you should have a second document for issues like this one, which, honestly, we've known about in some sense from the very beginning and don't have an answer for — I think, ultimately, [it could] kill storing mode, yeah.
H
So that may be a very small thing. So I think that's the right answer to me; that's where to go for it. But the bugs — I would like to see that. Just, let's do the document, let's adopt it, let's get it published by the end of the summer, okay? Because I think it should be non-controversial once we have the two questions in front of us at the next meeting: you know, A or B. Let's hum — A or B, which one do you want?
J
Okay, so I can give you some history on some of those things, and I agree on the fact that the spec is not complete on how you use those things. Part of the reason is probably that people didn't necessarily agree on how to use them when we wrote the spec, so we ended up leaving it open — so the next generation would decide, which is what's happening now. So I'm very, very, very happy to see this; I love your draft as a problem statement.
J
The DAO-ACK was [built for one hop] — you're thinking about radios, right? But RPL is a routing protocol; we run it on wires and very high-speed networks, actually. So there the DAO-ACK is real: you don't have any [link-layer ACK]. The initial design was not end to end. I mean, if we want to design an end-to-end ACK, let's do it — we don't even have to call it the DAO-ACK; we can call it the way we want, and we don't have to stick to the DAO-ACK format.
J
The ACK is there [for] when you can't leverage your lower layer for doing it — and RPL has a number of things like that, where you say: okay, if you have a lower-layer thing which will do it, [use it]. The idea was: if you have DAO information, you need to pass it to your parent, and if it doesn't work, then you get a negative acknowledgment and you retry; and if that doesn't work, then you have [to re-attach]; you retry at some point.
J
You need to pass it [along], and yes, it might — we have to see the consequences — but we hoped it would work: that ultimately retrying to the alternates would end up working. We have to talk about what happens when it does not; just discovering that it did not doesn't help a lot, because you don't know, along the chain, what's broken. So we have to figure out a solution.
J
It will take a little bit more than a second to get there, but just understand the end-to-end [story]: the initial design is just a local acknowledgment; it's not end to end. If we want something to solve the end-to-end, let's welcome it — it's not there. Okay, all right — the DTSN. That was initially something that you would trigger either if you find a discrepancy in what you see — like, you know, the flags in the packets which tell you there is something wrong there —
J
— I just want to assert that my knowledge of my children is what it is — and the idea was: either [you ask] just your first-hop children, or you go, as you said, down through the DODAG. It could just be a one-hop refresh; I think the option is there. Initially, that was the option — I just don't remember what ended up in the spec; we made so many revisions. But in my mind it was either for your first-hop children or for everybody. One design [goal] behind it was to rebuild the DODAG without reforming it.
J
If you change the version number, you will reform the DODAG — the shape of the topology will be changed, right? Because people will hear [the new version] and [redo parent] selection. Now imagine that you do a DTSN increment yourself: you keep the same DODAG; you just repaint it, right? By painting, I mean we put the addresses where they are on the [routers]. So that was the difference between the DTSN and the version: the version rebuilds everything; the DTSN just retains the existing structure.
J
One thing was: I find a discrepancy looking at the data traffic — I'm getting a packet from somebody, and I don't know that somebody. Okay, interesting: do I know all my children? Right, that's one reason. Another reason is, you know, I see these guys — well, the packet going up when you think it should be going down, all those things. So there is something wrong; I may just repaint my graph to see [who's] behind me. So these are the sorts of reasons.
J
Also, another one would be: I want to reparent. I lost all my possible parents, and I would like to explore who's behind me, so I avoid them, and I try to jump to somebody else. With that, the idea [is I] just want to reassert [who's] behind me — just [get] reassured that this guy's not [behind] me — let me jump to it and take the risk; you know, we can stretch.
J
We never actually standardized that, but that was one way. I wanted that not only to refresh the DODAG without changing it, but also to do it smoothly, like a counter-wave, you know — a wave from the bottom to the top. So that's what it's for. Now, can we use it? Is it usable, etc.? I'm just telling you what it's for. Yeah, okay — and then, again, I agree with everything about the problem statement, things being unclear; I'm just telling you what the tools are for. The ACK was local, one hop, yeah.
H
Michael Richardson again. Pascal, that was wonderful — I'm so glad it's on tape. I think you're in violent agreement with me, actually. The point is that if there's unclarity — and you just clarified about four things, okay, that clearly many of us were unclear about — then clearly 6550 didn't say it clearly enough. So that's what I'm saying: all those things you just clarified —
H
I would like that in a low-hanging-fruit update document, okay? And if it turns out, though, that, as you said, we need an end-to-end DAO-ACK or equivalent, okay, then great — that goes in the other document of things. So the purpose of the low-hanging-fruit document, that we think we can do quickly, is that nothing in it should be controversial, yeah — because you just clarified a bunch of things, which now makes a bunch of things potentially completely non-controversial,
H
if that's what the clarification is. So that's all I want: that there's no further, you know, concern or whatever about what does this mean, or what does this do. This business with the bottom-up [wave] — that sounds great; I never conceived or understood that from, you know, the last time I read 6550. But that's really — I did not know that, so that's really important. What I'm trying to capture is: there's a bunch of knowledge that we haven't all come to, and we'd —
H
maybe we'll go back to 6550 and go, "oh yeah, that's what that paragraph meant after all." It was never really clear to me when I read it; I needed a bigger example to explain it. Now that I've implemented it — and Pascal already conceived of it ten years ago — so, we already... it's gone. So that's what I want; that's really what I want: to capture all that stuff quickly. Thanks.
J
That was my question: you say everything that's in there must be captured in a document, and so on. We just want to understand what the future is. Is this document like a problem-statement document, or do you want to evolve part of the solution in the same document? How do you want to structure the work?
B
We have this discussion now. If you want to introduce BIER into RPL, there has been an earlier document by Carsten which says how you could do multicast by using BIER. There will be another presentation later on by Pascal on the different modes of BIER you can use, also for unicast. Having all these proposals, we wanted in the end to start a design group — with the agreement of the working group — to look at this problem further: how to put BIER into RPL. Sorry — yes, Carsten, please.
I
So basically the observation was that it would be good to have a multicast forwarding story for non-storing mode, and we were thinking about good ways to do source routing for multicast. That was a research project we started some time ago, and we came up with a solution that we call constrained-cast. That actually was a result of a research project, but not necessarily a draft that we wanted to pursue here in the IETF.
I
Then in 2014 BIER happened, and we saw: oh, there are other people working on source-routed multicast. It's not exactly source-routed in BIER, necessarily — it's maybe routed somewhere — but yeah. So this research suddenly became interesting again, and we submitted it as an Internet-Draft.
I
It sat for a while and became a working-group document, but it certainly makes sense to revisit what's actually happening in the working group and see how that is applicable to what we are doing — but maybe also the other direction: how what we are doing can be useful in some deployments that what BIER currently has doesn't fully address. I have presented this before, so just to remind people what this was about: source-routed multicast in non-storing mode, where the source routing always comes from the root.
I
You
need
to
put
information
into
the
packet
for
every
forwarding
node
that
sees
the
packet
to
decide
whether
it
has
to
forward
it
or
not.
And
of
course,
one
way
of
doing
that
is
just
listing
the
forwarding
nodes
in
a
sauce
rod,
but
that
becomes
big
very
quickly,
even
with
with
great
six
large
compression
and
so
on.
So
what
we
looked
at
was
what
what
are
efficient
representations
of
lists
of
data
and
since
1970.
Actually
we
know
about
bloom
filters.
I
So we can directly use the interface addresses that occur; we don't need any coordination or numbering or whatever, so this is a small incremental layer on RPL non-storing mode. And of course the problem is that Bloom filters are probabilistic, so we have false positives. False positives cause spurious transmissions, and a spurious transmission of course eats some energy somewhere.
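The Bloom-filter idea described above — hashing each forwarder's interface address into a shared bit field, where a membership test can yield a false positive but never a false negative — can be sketched as follows. This is an illustrative toy, not the constrained-cast encoding; the filter size, the hash count, and the use of SHA-256 to derive bit positions are all assumptions for the example.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k hashed bit positions in an m-bit field."""

    def __init__(self, m_bits=64, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = 0  # the whole filter is one integer bit field

    def _positions(self, item: bytes):
        # Derive k bit positions from a SHA-256 digest of the item.
        digest = hashlib.sha256(item).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[2 * i:2 * i + 2], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: bytes) -> bool:
        # False means definitely absent; True may be a false positive.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"fe80::1")  # hypothetical forwarder interface addresses
bf.add(b"fe80::2")
```

A forwarder receiving the packet would test its own address against the filter carried in the packet and forward on a match — accepting that a fraction of non-forwarders will also match and transmit spuriously.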
I
So we have to think about how bad that is in a specific situation, and how bad it is of course depends on the properties of the network. There are a lot of things that go into that: for instance, how full your Bloom filters are — which depends on how many forwarding nodes you have for a particular multicast group and also on how many bits you actually allocate for it.
I
So you have a little bit of control over that, and if you have a very dense network, of course, you will have more nodes that see a multicast packet that is not meant for them and start exercising these false positives. So the density of the network also has an effect. It's not something where you just write three formulas on a sheet of paper and understand what's going on, but basically the spurious transmissions that we are seeing here are not doing any damage, except for wasting energy and spectrum.
I
This is just an example. If we allocate just 64 bits to the filter, which is kind of a minimal size, we can have some six or eight forwarding nodes — and I'm talking about forwarding nodes in a multicast group, not leaves — before we start getting 5% false positives. The handling capacity goes up if we allocate more bits, and if we accept 30% false positives, which is significant, then we can do a hundred forwarding nodes in 256 bits.
I
So there is some scaling possible, but this is certainly not something that you would want to do for a very, very big network — but then enumerating bits is maybe also not so great for a very, very big network. Okay, so that's the basic idea of that draft. Pascal, you have to explain what the abbreviation means.
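The capacity figures quoted above (roughly six to eight forwarders in a 64-bit filter at about 5% false positives, and about a hundred forwarders in 256 bits at about 30%) can be sanity-checked against the standard Bloom-filter false-positive approximation p ≈ (1 − e^(−kn/m))^k. The choice of k = 2 hash functions below is an assumption, but it happens to reproduce both numbers.

```python
import math

def false_positive_rate(m_bits: int, n_items: int, k_hashes: int) -> float:
    """Standard Bloom-filter false-positive approximation (1 - e^(-kn/m))^k."""
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# Assumed k = 2 hash functions for both configurations mentioned in the talk:
p_small = false_positive_rate(64, 8, 2)    # 8 forwarders, 64-bit filter
p_large = false_positive_rate(256, 100, 2) # 100 forwarders, 256-bit filter
print(f"{p_small:.3f}")  # ≈ 0.049 (~5%)
print(f"{p_large:.3f}")  # ≈ 0.294 (~30%)
```

The formula also shows why density matters: the rate depends only on the fill ratio kn/m, so doubling the forwarder count without enlarging the filter degrades it quickly.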
M
Right, so basically we've got the BIER stuff, which is kind of RFC now, since a few months, and that's basically the architecture where every bit in the bit string indicates a leaf receiver.
M
Encapsulations
MPLS
and
you
know
anything
else,
Ethernet,
so
that's
a
flexible
encapsulation
and
then
we've
just
started
to
adopt
by
the
working
group
the
model
where
every
bit
indicates
an
adjacency.
So
there
would
be,
for
you
know
the
non
storing
mode
where
you
indicate
really
the
past
through
the
network.
J
First slide here — I've got some backup slides, so if you ask me, because we have time, I can always go through them to show you how BIER works with RPL, with the proposal here. I thought that most people in the room already knew that, so I just put it as backup slides, but just jump in and ask, please, and I'll jump to them — or you can visit the slides after the meeting; just scroll a little bit and you will find them.
J
So what I wanted to list right here is that basically we have this matrix of storing mode versus non-storing mode, and whether you use the classic old BIER, using one bit per destination, versus Bloom filters, which can be seen as a compression form — an alternate form of expressing the bits — which has pros and cons that now need to be discussed. But what I'm trying to say first is that the Bloom filter can apply whether you're doing storing mode or non-storing mode.
J
It doesn't matter whether you express the address (or whatever else) of every hop — which is what source routing does and what BIER-TE does, so non-storing mode and BIER-TE are really much in the same corner here — or whether you want to express the final destinations, which is what normal BIER does and what our storing mode does. At the end of the day, you can always express these things as IP addresses, and you can always compress those IP addresses into a Bloom filter. So it's not a matter of multicast or unicast.
J
So I started this draft much later than Carsten's work, because I just showed at some meeting how BIER could work for RPL, and then the chairs asked me to write something up, so I wrote something up — I hoped, actually, that Carsten would join. What I did is I tried to write something which shows that you can do all four, and to present this as an addition on top of doing just plain BIER. That's how I structured the document.
J
Now, if you express those leaves as a bitmap — say you have fewer than 200 leaves in your network — a bitmap of 256 gives you some elasticity there, and you can express everybody. And now the amount of state that you have to keep in every individual node is no longer the number of leaves in your network, but the number of children that this node can have, which can be controlled, I mean, with RPL.
J
We don't yet have those extensions that were talked about to control that, but it's something that can be controlled. So all of a sudden, even for smaller IoT devices, thanks to BIER, storing mode becomes possible. That was my lead interest in using BIER for storing mode: making storing mode alive again — or great again.
J
So,
basically
yeah.
The
main
question
that
that
the
group
has
to
sort
out
is
is
how
we
want
to
structure
this
I
mean.
Could
we
could
have
a
document
here
and
the
code
occupant
sure
we
could
have
a
document
here
and
and
just
something
which
is
which
there
is
how
you
do
be
a
regardless
of
whether
your
choice
you're
talking
about
is
the
next
stop
or
whether
it's
the
the
end
of
the
path
right?
J
We
can
stretch
out
this
work
in
the
number
of
fashions.
So
that's
that's.
One
of
the
questions
we
have
in
front
of
us
and
I
tried
to
actually
document
more
that
aspect
and
the
actual
operation
of
beer
in
in
repo,
but
actually
it
documented
that
as
well,
but
I
kept
I
kept
completely
open.
How
gloom,
because
I
really
thought
that
that
would
be
customs
interest.
J
So
this
is
what
I'm
trying
to
express
here,
keep
in
mind
that
beauty
is
all
about
specifying
bits
for
each
hub
or
each
segment
that
you
want
to
follow.
So
if,
if
you
see,
if
you
look
at
what
Kirsten
Express
is
in
constrain
cast,
it's
it's
pretty
much.
The
same
thing
right,
we
express
the
different
hubs
that
the
packet
has
to
follow
and
then
the
false
positive
or
I
computed,
a
mix
of
two
shares
in
and
I
thought
was
coming
from
the
same,
so
I
followed
this
way,
but
actually
this
was
coming
from
here.
J
For the next hop, right. And there is a side aspect to that about Bloom filters: on the one hand this fits the BIER architecture, and on the other hand it is something novel — we may have to talk to the BIER working group and see if there is a wider interest, and how that fits the BIER architecture, and whether BIER in general is interested in looking at that. So, all those points — yes.
I
Yeah, so I'm not sure we will be using the MPLS encapsulation, so maybe the RPL form of this is actually different from the other forms. And, as I said, we are using the rank — the constrained-cast proposal is closely tied to having RPL below. So, to compare the various things with metrics — which I would very much like to do — the bit-by-bit encoding doesn't really have to be there; there are other good ways to compress it.
J
The intent we had in mind was to link that to the 6LoWPAN ND dependency: you actually register an address to the 6LBR, and the 6LBR, which has your address, can map it to a bit. Usually it's co-located with the RPL root, and, you know, the registration has a lifetime, so it's like a lease: the bit is owned for that lease time, you don't hand it out again, and that's how it will be managed.
J
I think all this goes into Carsten's proposed work, which is: how efficient are the bits? On the one hand, you have the numbers that you showed about the Bloom filter, which say, okay, with 256 bits you get this 30 percent or whatever — those are numbers of how big the Bloom filter has to be. The bitmap, in turn, has to be bigger than the number of hosts because of the churn. So then again, you have loss either way, and we have to compare all those losses, basically, I think.
M
What
I
heard
Carson
saying
was:
maybe
what
I
was
thinking
that
the
second
step,
which
is
kind
of
exactly
this,
you
know,
what's
the
control
plane
necessary
to
do
the
management
of
the
address
space
I
mean
the
question
is
a
little
bit
sure
we
do
just
already
start
brainstorming.
At
least
you
know
the
the
rat
Megan
is
not
not
the
exact
protocol
messages
that
we
need
to
add
to
the.
J
Control
plane
is
easy,
fast
is
about
I.
Think
if
I
understand
well,
Caston
and
I
would
agree
with
what
you
said
is
the
hub
thought.
Yes,
I
I
have
200
hosts.
How
many
bits
do
I
want,
because
I
have
true
right:
I
want
to
keep
some
some
buffer
here,
so
I
don't
reuse
a
bit
too
fast
and
I.
Don't
call
that
control
plane.
How
do
you
call
it
well.
I
That's the point for that part. And what we also could do is increase the number of rows in this table, because we can look at other ways of compressing the bitmap, and I think there are some pretty good ways — and these pretty good ways may actually get close to the efficiency of Bloom filters. So I think that's something worth looking at.
I
It's not as big, so you could put the same numbers into a Bloom filter, which probably doesn't make sense, because the IP-address case is, as well, the exact number. So there's a little bit of space between them that could be used by a probabilistic compression, which could be useful for not-really-large networks.
J
You
only
only
interest
in
that
half
for
the
non
storing
unicast
world
I
still
fail
to
understand
why
you
extended
this
condition
for
the
multicast,
because
as
soon
as
you
build
what
you
build
constraint
cast
and
you
have
the
destination
IP
address
and
you
have
all
the
hops
because
you
learned
them
from
so
you're
doing
non,
storing
mode
unicast.
So
in
that
world
you
have
all
your
dresses,
you
know
existing
network,
and
now
you
know
how
to
get
to
any
IP
address
at
the
leaf.
J
If you're not doing Bloom, that's not a problem, but with Bloom it becomes a problem. If you're not doing Bloom and you send a packet from down in the network, as long as you don't resolve all the bits, you still have to copy up, but you can still forward with the bits; with the Bloom filter you'd never know. So, okay — it's one thing to mark, in our list of protocols or capabilities, that the bit-by-bit approach allows you to send a multicast packet from inside the network.
J
We have lots of time? Okay. So, actually, we started discussing the questions I had asked anyway. What is the protocol to allocate a bit? We discussed that and said: I mean, how many more bits do we need than we have addresses? So we can keep, you know, this resolution to use a bit allocation. Packet compression — I mean, it's not just having the BIER header; we also need to transport a bitmap in the packets, so we have actually started a 6lo draft on that.
J
And then we've got: what do we do when I've got more hosts than I really feel safe to handle with my bitmap — what happens? I mean, do we make groups? With Bloom you've actually got a benefit: okay, if I'm beyond the statistics, I hope that it just increases my false positives, which increases the number of packets in my network, but I still work. In a bit-by-bit network, if I have more hosts than I have bits, then some hosts won't be served.
J
And for BIER-TE — I know a number of responses to that in Carsten's document; actually, reading it, it was very light — I mean, the draft could use more words in explaining — but we have a number of answers. What we place as the link ID could probably be the IP address of the next hop — I'm not even sure if that's the next hop; I guess it is.
M
Sorry, totally clueless, but what's the mandatory native forwarding plane? Does it, within the domain, have to be an IPv6 packet on the outer side, or could that actually be, you know, a BIER packet? Because we have all this overhead right now with the IPv6-in-IPv6 encapsulation, so I was wondering if we can replace the outer IPv6 header with a BIER header. Yes.
J
Actually it's whatever we like, because we have this 6LoWPAN header compression mechanism, which already compresses the IPv6 header dramatically, to almost zero — it's only implicit; somehow you derive it from something. So at some point — and this draft has it — we carry almost only the bits, but we say how you could reconstruct the whole packet.
J
If you really want to show that we stick to the IPv6 architecture and to the BIER architecture, there must be a way, through all those implicit rules, to reconstruct the full IP packet — but we never use it. We actually forward the packet in the compressed fashion, meaning that the only things you will mostly see are the bits.
J
So, since we have a little bit more time, I would just show you in a few seconds — just to see if we have reactions here, and since we have new people — how we thought the unicast would work; the multicast is pretty much the same. It's very simple. You know, RPL builds directed acyclic graphs towards roots. So, in my case, I have four roots here on the backbone, so you have four directed acyclic graphs.
J
Just
pick
one
and
I
gave
in
the
tries
every
node
has
formed
an
address.
You
know
your
neighbor
discovery
took
off
six
of
em
d-block
and
in
non
storing
just
pick
the
case
of
non
storing.
It
works
either
way
in
non.
Storing
a
node
device
here.
Well
tell
the
root,
with
a
unicast
and
to
a
message
MF
and
my
parent
is
B.
B
will
tell
the
root
and
B
and
my
parent
is
see.
J
Well, forget about this — this is just ways of allocating the bits, which we discussed in our earlier meeting. So say we have allocated the bits, okay: each node has one particular bit which is mapped to — as you said, the node ID could be just its IP address. If you use 6LoWPAN ND, you register an IP address, and for that IP address you get a bit. That's one way of doing it, and so for each address we've now got the mapping, right — no surprise for the BIER people.
J
A will be telling B, "I have this bit here." B will keep the bitmap for all its children — and only its children. It will aggregate that with an OR operation, so it will aggregate its children and itself. So it will end up with this bit for F, this bit for A, and its own bit, and that's what it sends to its parent.
J
So you see three bits now — A, F and B — that's the result of the OR operation, and you keep ORing: each parent ORs all its children, maintaining the bitmap as the OR of all of its children, and you end up at the root here, which — I picked as many bits as nodes — somehow gets all the bits set.
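The OR-aggregation just described — each node advertising the union of its own bit and its children's bitmaps up towards the root — can be sketched like this. The topology and bit assignment are hypothetical, loosely following the A/B/F example in the talk.

```python
# Hypothetical bit assignment: one bit per registered node.
node_bit = {"A": 0, "B": 1, "F": 2, "root": 3}

# Hypothetical DODAG: parent -> children (A and F are children of B).
children = {"root": ["B"], "B": ["A", "F"], "A": [], "F": []}

def aggregate(node: str) -> int:
    """OR together this node's own bit and the bitmaps of all its children."""
    bitmap = 1 << node_bit[node]
    for child in children[node]:
        bitmap |= aggregate(child)
    return bitmap

# B ends up advertising bits for A, F and itself to its parent: 0b111
print(bin(aggregate("B")))
```

In the real protocol each node would compute this incrementally as child advertisements arrive, rather than by recursion, but the resulting bitmap is the same union.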
J
That's the problem of the bitmap: the management, right. What do you do when you, you know, tend to overflow it? That's a question to be asked. I mean, in BIER you can do groups, all right: you keep the same bits, but you have some set IDs to say the group — but then, when you send something to different bits which are in different groups, you need to send different messages, one per group. So that's one way of handling it.
J
Let me show you what happens for a message from the root. The root will look at the one or multiple destinations that the message needs to reach — a unicast is just a special case — and so it just computes the destination bitmap as the OR of all the destinations that you want to reach. So, for instance, in the example, if you want to reach these two destinations, you just create a bitmap of those two destinations.
J
Now, as the packet is forwarded, the node will look at all its children one by one, and it will do an AND operation between the bitmap of that child and the bitmap of my destinations. If the AND is nonzero — so there is at least one bit in common — that means that this path is going to one of the destinations.
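The forwarding rule just described — copy the packet towards a child exactly when that child's aggregated bitmap ANDs non-zero with the packet's destination bitmap — is essentially a one-liner. The child names and bit positions below are made up for illustration.

```python
def forward_targets(dest_bitmap: int, child_bitmaps: dict) -> list:
    """Return the children that lead towards at least one destination bit."""
    return [child for child, bm in child_bitmaps.items() if dest_bitmap & bm]

# Hypothetical: destinations are bits {0, 2}; child X's subtree covers bits
# {0, 1}, child Y's subtree covers bit {3}.
dest = 0b0101
kids = {"X": 0b0011, "Y": 0b1000}
print(forward_targets(dest, kids))  # only X shares a bit with the destinations
```

A packet for multiple destinations thus fans out only along the branches that actually lead somewhere, with no per-flow state beyond the child bitmaps.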
J
You have an optimization: if you find that the destination bitmap matches pretty much all your children, you just broadcast it, as opposed to selecting and unicasting one by one. Also, you don't have to reset the bits as you forward, because it's a DODAG, right — the bits that have been served can be left set and will just be useless from then on. It's an option we have to decide.
J
The interesting thing — and we discussed that last time — is that we can also make it reliable, kind of, because once the packet reaches a destination, we can send back an acknowledgement, with a bitmap as well, and the bits can be ORed as it flies up; the result, when it reaches the root, is the OR of everything — so those are all the nodes which got it. Comparing that with the sent bitmap, the ACK bitmap gives you the missing bitmap, and you retry that. So that's an interesting side effect.
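The retry computation implied above — the root compares the bitmap it sent with the OR of the acknowledgement bitmaps and retries the difference — is just a bitwise AND-NOT. The concrete bit values below are hypothetical.

```python
def missing_bitmap(sent: int, acked: int) -> int:
    """Bits that were sent but never acknowledged: retry these destinations."""
    return sent & ~acked

# Hypothetical: root sent to bits {0, 1, 2}; the ACKs ORed on the way up
# cover bits {0, 2}, so bit 1 still needs a retransmission.
print(bin(missing_bitmap(0b0111, 0b0101)))
```

Because the ACK bitmaps are ORed hop by hop, the root learns the full set of reached destinations from a single aggregated acknowledgement rather than one ACK per leaf.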
B
Thank you very much, Pascal. I wanted to see if there's interest in this work in the working group — I mean, there's already been considerable effort. Not to influence you: I think it is very interesting. So maybe I can have a hum from the working group if they think we should take on this work on BIER in RPL. Please hum now.
B
One of the things that we thought of as the way forward was to start a design team — that is, a select number of people — and they will look at the work, analyze what should be done, come forward with alternatives if needed, tell us when and in which circumstances it is interesting to do this, then report back on it, and also see what kind of documents we need. That was the idea for a design team.
B
I think we have a team, yes.
B
Then you try to come to a conclusion, see how the tiny design team looks, and tell us at the next meeting — or maybe before, already have a document in which you say the different alternatives, when each is useful, what kind of documents we're going to look at, and possibly the IPR involved. Is that correct? Yeah? Is that clear? Yeah? Okay, thank you very much indeed — the most difficult subject; I'm glad that you're making progress. Thank you.
H
1.5 millimeters, right, Carsten? 1.5 — there we go, they've adjusted it. Yeah, I know, I could notice that; I could hear them despite that. So, since yesterday these slides have been updated slightly; we had a very productive discussion yesterday in the Code Lounge to understand some of the issues. So if this sounds like nothing at all like what we talked about in 6TiSCH yesterday, that's a good thing — it means we've improved. Just how many people were there yesterday in 6TiSCH?
H
Okay, so not even half — so for the rest of you this will be a little bit new; that's good. So this is just a little bit of what we're going through. This is basically a set of requirements coming from 6TiSCH to ROLL, to do some things and to deal with some specific problems that — well, they turned out to be ROLL problems — that we have in 6TiSCH.
H
Okay, so they're not specific to 6TiSCH — actually, they're very much not specific to 6TiSCH — but in 6TiSCH we have a specific way that we wish to solve some of the issues; that's why. And we have some layer-2 things that we're going to do. Which button — there we go. So this is like a complicated network that Pascal drew a while ago: we have a blue network, we have a green network.
H
The blue network has some kind of a backbone happening and has a number of DODAGs, with some discussion back and forth.
H
So this is where you first join a network and get the keys and do some things, and in that case it's very important that you find out which ones are the green networks and which ones are the blue networks — because if the blue network is not your mommy, then you probably shouldn't join it. Okay, but the point is that if you hear a whole bunch of things — one from here, one from here, here, here, here — that's a lot of things that you may have to go through. So it'd be nice.
H
What we would like to have in 6TiSCH — we have a way of doing this — is to identify the blue networks as being blue and the green networks as being green, without disclosing anything to an attacker about what's going on. The important thing is that, if it turns out you're supposed to join the green network, it would be nice if you tried one blue network, then put all the other announcements that you hear to the bottom of your list and tried the green network. Of course, if the green network didn't work —
H
Then
it
might
be
that
the
blue
network,
you
saw
wasn't
route,
wasn't
with
a
malicious
attacker
and
you
should
try
a
different
different
way,
so
you
do
have
to
try
them
all
in
the
end
before
you
can
give
up.
But
as
soon
as
you
succeed
with
one
of
them,
then
you're
you're,
you're
golden
okay.
So
to
that
end
we
are
so
it's
just
that's
it.
So
there's
the
other
aspect.
I
want
to
go
back
to
this,
which
is
that
having
joined
a
network
there's
then
an
issue
the
first
time.
H
Okay
and
so
then
we
have
this
issue
that
we
need
to
do
essentially
doe
tag
selection
before
we
do
as
part
of
parent
selection,
and
that
becomes
a
challenge
because
there
may
be
different
keys
and
there's
other
stuff
like
this,
and
you
can't
hear
the
DAO
is,
and
you
don't
get
all
the
metrics.
So
there's
some
issues,
there's
some
overlap
between
essentially
layer,
2
decisions
and
layer,
3
decisions
and
we're
trying
to
get
the
right
information
to
right
places
so
that
we
can
make
this
thing
first,
and
you
have
some
comment
at
this
point.
H
I see where you're going, and that may be an interesting addition, but I don't think that's our problem — that we need to keep secret the existence of the blue network or the green network, or the fact that this guy and this guy are part of the same network. What your suggestion would solve is that it would prevent an observer from noticing that blue over here is the same as blue over here.
H
That
is
not
our
objective.
It
is
an
interesting
objective,
but
that's
not
our
objective.
The
objective
here
is
that
this
guy,
who
is
potentially
maybe
an
attacker,
but
there
potentially
is
an
interesting
and
a
non
malicious.
No
that's
say
for
the
moment
would
like
to
avoid
going
through
all
of
these
blue
networks
just
to
discuss
that
he
belongs
in
the
green
Network.
It's
a
head
of
queue
problem.
H
That's what's on the next — a couple slides in the future, right. So the other thing is that, in some sense, the PAN ID is that thing, but the PAN ID is too short, and it also has some other operational issues. So, as I said, this node may find that this part of the network is too congested and wish to move over here.
H
Okay, and I'm not at all going to talk about how the decision is made, but rather how the information about the status of this network gets around. Okay, so don't ask me questions about parent selection involving multiple things, because that's Georgio's problem. Right, okay. So, just to be clear, there are two parts to this: enrollment and parent selection.
H
If you saw the slide, we had some dispute because I didn't really see how parent selection fit into it, and I now understand at least well enough to update my slides, if not well enough to update the document quite yet. So clearly the blue and green networks are different. The blue network in this case has three PAN IDs, and different PAN IDs lead also to different IPv6 short addresses.
H
What was the word I used — transparent? No. Painless? No — another word: seamless. For seamless movement between the different DODAGs, you would have to renumber as you moved from one DODAG to the other, and that's okay; that's like a compromise. You could also build this network with all three having the same PAN ID and the same network, but then you get into issues with synchronizing the ASNs and things like this.
H
If you're doing 6TiSCH, for cryptographic reasons you probably wind up — are forced — having different PAN IDs if you have overlapping networks. If, on the other hand, that's not the case and you're the green one, you just build a really big green network, and you just happen to have multiple attachments to the root, to the core — that may not be an issue.
H
Your DODAG could extend up to something here, with this really being a parent and all of it really being one DODAG, all one PAN ID, just multiply attached — though the highest levels happen to be Ethernet rather than 802.15.4. That's a different problem we're not really specifically dealing with. So, in 6TiSCH, we've changed this diagram probably 12 times since yesterday or something like that. We are going to create an information element using the IETF allocation.
H
That's
now
available,
it
will
have
something
called
a
proxy
priority
which
is
essentially
determined
by.
If
you
want
to
enroll.
How
likely
is
this
proxy
going
to
be
able
to
help
you
so,
for
instance,
how
many
neighbor
cash
entry
does
it?
Have
we
heard
rales,
you
know
simulation
was
eight
right,
we
know
was
not.
You
was
someone
else
previous
to
that
at
eight
right.
How
many
do
you
want
to
allocate
out
of
your
eight
neighbor
cash
entries
to
an
untrusted
node
that
you
don't
know
who
it
is
to
do
enrollment?
H
The
answer
may
be
one
possibly
to
for
that.
Eight
I,
probably
I,
probably
be
skeptical
in
one.
So
if
you've
already
got
something
going
on
and
it's
it's
continuing,
then
the
proxy
priority
probably
needs
to
be
a
very
undesirable
value
such
that
you
don't
get
any
more
profit
more
nodes.
Trying
to
join
they'll
try
something
through
some
other
other
proxy
other
hand.
You
know
you've
got
a
hundred
twenty
eight
entries.
You
know
you
could
do
ten
percent
to
that
easily
and
you'd
have
a
very
much
better.
H
Our
is
a
bit
that
essentially
allows
us
to
say
that
this
node
is
a
a
router
from
the
six-man
point
of
view
and
would
accept
a
unicast
router
solicitation
and
the
reason
why
you'd
want
to
do
that
is
because
the
alternative
is
that
the
new
node
or
the
leaf
node
in
the
if
the
leaf
node
would
have
to
either.
According
to
this,
the
rules
would
have
to
either
do
a
a
broadcast,
router
solicitation
or
wait
for
a
broadcast
router
advertisement.
Okay,
so
this
says:
hey
you:
can
you
forum
with
my
my
lair
to
address?
H
H
We have this network ID, which used to be 16 bytes but is now variable as of yesterday, and it's probably something like a SHA-256 hash of your prefix — but we've decided it's set at the DODAG root, and that's all anyone else needs to know: the root sets it and tells people about it.
H
Finally,
we
have
this
doe
tag,
priority,
which
is
the
relative
importance
of
the
say,
the
three
green
pieces,
okay-
and
that
said
at
the
root
based
upon
some
knowledge
it
has
about
the
traffic
through
its
part.
How
how
congested
is
it
at
the
top?
How
willing
in
is
it
to
accept
a
new
traffic?
That's
not
just
for
enrolling
nodes,
but
also
for
that
note
that
needs
to
jump.
That's
already
enroll
that
needs
to
jump
from
one
network
to
the
other,
because
the
the
et
X's
are
too
poor
for
some
reason.
H
We agree it's information disclosure — yes, absolutely, okay. So we have this problem, right: at some point somebody has to speak first, okay, for people — for nodes — to find me, right. So if you want to create a secret thing, and we could do this in a zero-touch way, then I'm all yours — tell me how to do it. I don't know how to do it, but it could be that you know something I don't. Yes.
N
You know, very often in the real world — with these things deployed by my company — you have different, essentially, DODAGs that are deployed in the same space, and a new node has to pick which one it joins. And so, without something like this, it will try and try and try and try until it hits the right network, and that's a waste of energy; it's a waste of time.
N
You know, you may join — as Carsten was saying — you may join the wrong one for the wrong reasons, but joining will fail, and then you will try the right one afterwards. You know, with the proxy priority among the different DODAGs — it's actually the same prefix, essentially — you learn which ones you should prioritize over the others. You don't want to attach to a network that's completely congested and then have another network right next to it that is completely empty; you should attach to that empty one, and this lets you do that.
J
For instance, if one DODAG has a lot more devices and traffic than the next, what will happen is there will be a lot of load on the radio, which will create loss, which will push the ETX (which is one of the metrics we use a lot) up, and as the ETX goes up, the rank goes higher, and all of a sudden the DODAG next to you becomes more desirable because the rank looks better there.
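That feedback loop can be reproduced with a toy model (all numbers invented, purely an illustration of the behaviour being described): rank tracks load through ETX, and a herd of nodes all jumps to whichever DODAG currently advertises the better rank, so the imbalance flips every cycle instead of settling.

```python
def simulate(cycles: int, base: int = 20, herd: int = 60) -> list:
    """Toy oscillation model: two DODAGs each keep `base` fixed
    nodes; `herd` mobile nodes all attach to whichever side is
    currently lighter (i.e. advertises the lower rank). Returns
    the (load_a, load_b) pair seen at each cycle."""
    side, history = 0, []
    for _ in range(cycles):
        loads = (base + herd, base) if side == 0 else (base, base + herd)
        history.append(loads)
        side = 1 - side  # the other side now looks better, so the herd moves
    return history
```

With these invented numbers the two DODAGs swap loads every cycle and never converge, which is the field behaviour described next.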
J
So nodes move as a bunch to the next DODAG and, guess what, you create one of those oscillations, and that's something that we actually see in the field. So it's not just about defining a DODAG priority; it's about defining it well, about expressing it. It's something which mostly hits the root, right?
J
The load at the root is mostly the load of the network, so we have to carry that over RPL from the root down to the devices, and just carrying a metric is not that hard, right; just add this field in the DIO. The real problem is the control loop which will stabilize the two DODAGs one next to the other, so that, you know, they will end up in a situation where they are evenly loaded, or something, and even defining "evenly loaded" is a problem.
H
So the last thing is, we had a rank priority, which was really the DODAG rank, and then it was pointed out to me that the other information element in our beacon already has that number in it, so we didn't need to repeat it. But it was news to me; I wasn't familiar with that other IE. So this just sort of describes a little bit the things, and I used the word, I said, "preference".
H
So I made a mistake again; I try to be consistent, and I did it all here. That's what I get for doing it at 7:00 a.m. this morning. I meant to say: where you see "preference", read "priority". I'm trying to consistently say "priority", where lower numbers are better, okay, and "preference" is an ambiguous term for many people. So, what's the ROLL part?
H
Okay, so I haven't updated this document yet, but it has some of this. So we want to have what I believe is a new metric option, a new metric container, which would contain the DODAG priority. I'm pretty sure it needs to be a metric, because it needs to show up in the DIO and it needs to be refreshed as it goes down the DODAG, whether or not some other objective functions would do anything with it.
H
Pascal said, you know, maybe one byte will do, and we went back and forth at one point, but what we've agreed on in the end is that it will be one to sixteen bytes, and we already have a length to deal with it, so the overhead of making it variable size is probably not significant. And my suggestion is that it's a truncated SHA-256 of the PIO, that it should be set at the root, and that it's up to the root to decide; no one else calculates it.
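A minimal sketch of the network ID being proposed here: a 1-to-16-byte value the root derives by truncating a SHA-256 over the PIO prefix. The exact input encoding (packed prefix bytes plus prefix length) is an assumption for illustration; the discussion only fixes "truncated SHA-256, set at the root".

```python
import hashlib
import ipaddress

def network_id_from_pio(prefix: str, length: int = 16) -> bytes:
    """Root-side sketch: derive a 1..16-byte network ID by
    truncating SHA-256 over the advertised PIO prefix.
    The hash input (packed address + prefix length) is an
    assumed encoding, not taken from any draft."""
    if not 1 <= length <= 16:
        raise ValueError("length must be 1 to 16 bytes")
    net = ipaddress.ip_network(prefix)
    material = net.network_address.packed + bytes([net.prefixlen])
    return hashlib.sha256(material).digest()[:length]
```

Because only the root calculates it and advertises the result, every other node just echoes the opaque bytes; nobody else needs the derivation.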
H
So we did this; the goal in 6TiSCH was to write it down, and 6TiSCH will probably adopt this document, I hope, to figure out what to do, and the goal in ROLL is to determine how the newly exposed metrics interact with or are derived from DIO things. In most cases we've decided they're not; they were derived from things that you'd find in the PIO. For instance, at one point the network ID was going to be the DODAG ID.
H
So, number of children; there's a lot of other things, and we haven't expressed that, although the suggestion yesterday was that the DODAG priority could literally be how many children you have in the whole DODAG, how many members in the whole DODAG, because as it gets bigger, probably the desirability of that DODAG goes down, which is pretty much exactly what you want.
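The suggestion just mentioned, using the member count of the whole DODAG as its priority, can be sketched as follows. The one-byte cap and the simple lowest-wins selection rule are assumptions for illustration, with lower values meaning more desirable, per the convention stated earlier.

```python
def dodag_priority(member_count: int, cap: int = 255) -> int:
    """Sketch: advertise DODAG size as the priority, lower = better.
    A bigger DODAG is less desirable, so it reports a higher
    (worse) number. The one-byte cap is an assumed encoding."""
    return min(member_count, cap)

def pick_network(adverts: dict) -> str:
    """A joining node simply picks the lowest advertised priority."""
    return min(adverts, key=adverts.get)
```

So a node hearing a 400-member DODAG and a 12-member DODAG would join the small one, which is the self-balancing behaviour described above.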
H
Of course, you might not want to disclose how many children you have, or you might not want to tell anyone that that's actually how you make the metric, but, you know, maybe that's fine; I don't think we need to. So we had some discussion about whether or not we need to standardize what the values are, because if you have DODAG roots from different manufacturers, then they may do the calculation differently, which means that the values don't have the same relative meaning. So that's something to think about.
J
Percentage is the wrong thing, the worst thing you can do, right. Because if my two DODAGs are very different in capabilities, 10% left on the left might be one guy, and 10% left on the right might be 100 guys, and I want to join the one on the right. So it's really the number of what's left, but that can be discussed by the working group.
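The point about percentages can be made concrete with a toy comparison (the numbers are invented): two DODAGs each report some free capacity, one small and one large, and ranking by percentage picks a different network than ranking by absolute headroom.

```python
def pick_by_percent(nets: dict) -> str:
    """nets maps name -> (free, total). Ranking by fraction free
    ignores how big the network actually is."""
    return max(nets, key=lambda n: nets[n][0] / nets[n][1])

def pick_by_absolute(nets: dict) -> str:
    """Ranking by absolute headroom, the behaviour argued for here."""
    return max(nets, key=lambda n: nets[n][0])

# Invented example: 20% free of a tiny DODAG vs 10% free of a big one.
nets = {"left": (2, 10), "right": (100, 1000)}
```

The percentage rule prefers "left" (2 free slots), while the absolute rule prefers "right" (100 free slots), which is the network a joining node actually wants.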
J
What I'm saying is you're taking a number of metrics, like the number of nodes left, or the amount of bandwidth left. I mean, the observation of the bandwidth being used around the root is usually what you care for, and maybe some other things which come from the nodes, right. I mean, if all the nodes in the network have their neighbor cache saturated, there is nothing that we can do, even if the root could take more nodes.
J
So it looks very much like putting a number of metrics together into a number, like an objective function does. So yes, I agree, we have to specify at least one objective function, and it would not be an objective function to compute a rank; it would be an objective function to compute a priority.
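A sketch of what such a priority-computing objective function might look like: fold a few root-visible metrics into one advertised byte. The metric names, weights, and clamping are all assumptions for illustration, not anything specified in the discussion.

```python
def priority_of(metrics: dict, weights: dict) -> int:
    """Hypothetical objective function that computes a DODAG
    priority (not a rank): a weighted sum of root-visible
    metrics, clamped to one byte, lower = more desirable."""
    score = sum(weights[name] * metrics[name] for name in weights)
    return max(0, min(255, round(score)))

# Invented inputs: member count, bandwidth used around the root,
# and average neighbor-cache fill reported up by the nodes.
weights = {"members": 0.5, "bandwidth_used": 100, "cache_fill": 100}
congested = {"members": 200, "bandwidth_used": 0.9, "cache_fill": 0.8}
quiet = {"members": 4, "bandwidth_used": 0.1, "cache_fill": 0.1}
```

A congested DODAG saturates at the worst value, while a quiet one advertises a much lower (better) priority.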
H
We don't know the loading right now; in storing mode we do not know the loading of each of the children, in non-storing mode we do. I know that some people would like to propagate that information up, and so maybe there's a connection right there immediately to doing things. Any other questions, concerns, any other objections to this work? Do you feel this is just totally the wrong thing to do?
H
It's just the DAO description of a container, okay: one's a metric, one's a configuration option. That's the concrete bits-on-the-wire thing the document has to do. Okay, then the document, probably, as we just said, has to say something about the canonical way of calculating this, or at least that there is one, and maybe it even has an IANA registry for the names of these things, so that we can actually talk about them; someone will write a new document.
H
It says a smarter way of doing it at some point, if that makes sense. And clearly, if someone writes, I don't know, a YANG module for DODAG roots, it would have to include the name of this DODAG priority objective function in it, so that it could be configured, but it doesn't have to be transmitted in any way, the way that our current objective function is described.
E
Hello, I'm Aris again. So this is my first time here, and I find all the problems that are being described very interesting. One thing I have to note, though, regards replicability. A lot of the problems you describe, like the problems with joining, or the balancing of the trees, and the churn: I have to say, I don't know, maybe there's some repository, but there doesn't seem to be an easy way for us to replicate these problems and be able to work on them.
E
So it's very nice describing them, and you have real-world networks which display these problems, but at the end of the day, since I'm not able to replicate the same network in a simulation, I can't actually attack the problem very well. So I was wondering whether there would be a way to share some of this information.
G
Sure, I'm glad that you're talking about it; replication is a big issue, yes. So we have been working towards a simulation framework for achieving something like that, so that, for example, if I face some problem, I should be able to send you just a configuration file and you should be able to reproduce that problem. Yeah, exactly. That is a work in progress; we call it the Whitefield framework. So we're working on it, and it exactly tries to achieve what you're describing. Okay.
H
The Whitefield framework, that's cool. The other thing that I was going to say is that I guess this F-Interop stuff is coming along, to be able to do things remotely, so that would also be, I think, really cool. I'm totally with you, by the way, and we've been through this over the, whatever, eight years of the existence of the ROLL working group; no, it's longer than that, maybe ten years. Kind of repeatedly we've had people, researchers, say: this doesn't work.
H
This doesn't work, and here's all the reasons. And then the working group has said: this is really interesting; how can we, you know, replicate it, so that we know whether we've fixed it or not? And then they've said: oh, I can't tell you any of that. So, well, maybe you shouldn't have bothered; you kind of wasted your time, right? Because if we can't fix the problem, and you can't talk about it, then I think you almost shouldn't even bother, especially.
H
So yeah, so cool, yeah. It actually does a native build, so that the code runs locally, runs natively, but it surrounds it so that it looks like it's the target architecture. Now there are some improvements to this, but that's typically how you do it, and that's the only way to get a simulation to run quickly. Okay, so there are some restrictions on what you can do, and what I hope is that something like F-Interop will give us a better platform on which to run all sorts of native firmwares.
N
So, yeah, I mean, that's the game, right; there's a lot to verify, there's a lot that, you know, changes all the time. So there are two projects I'd like to mention. So F-Interop, thanks, Michael, for mentioning it; that is aimed at ensuring interoperability between implementations, which is one part of the story.
N
I think you're talking about performance and replicating things like this, so, yeah, here we are working on a simulator, the 6TiSCH simulator, now the Python one, right, the Python one, yeah. That was released yesterday, version 1.0.0, just yesterday, and Maddie is working on a web interface for that. And so the idea is that, of course, it's TSCH at the bottom, but then there's RPL and CoAP and all that, and secure join.
A
Tomorrow we are going to have a session as well, in the morning, and this is going to be the agenda for tomorrow. So from 9:30 to 11:30 we have two hours, and we want to present AODV-RPL with Charlie, and then Pascal wants to present his proposal on leaves, and then Rahul with the No-Path DAO modifications, and then Pascal with the DAO projection draft, and that modification is going to be like a new version, and that's it for tomorrow. So we hope you can attend as well.
B
Personally, I would welcome some feedback on the length of this meeting, because usually we have much shorter meetings, with maybe ten minutes per person, and we cut it so that we have higher interest and people keep awake much better. But now we have longer discussions, which had good results, I must say. But if people would be kind enough to at least tell me personally whether they prefer the longer ones or the shorter ones, I will be very happy to receive those comments.