From YouTube: IETF104-RIFT-20190327-0900
Description: RIFT meeting session at IETF 104, 2019/03/27 0900
https://datatracker.ietf.org/meeting/104/proceedings/
Okay, so a very quick update on the working group status and plan. We have the base protocol settled; Tony will provide a detailed update. In this session we also have two interoperating implementations — except for the security envelope in both implementations, and except for some features in the open-source implementation; again, a detailed update will be provided later.
So our plan and ongoing activities: by now the base protocol is basically in soaking-time mode. We plan to issue the working group last call at the next IETF. We have already requested a security directorate early review, and we are looking for reviewers for the protocol specification itself. The core team has been working diligently on this all the time, but we do want reviewers from outside the core team so that we get actual scrutiny of the protocol. Discussions are ongoing; we will have a presentation on that as well.
Today, YANG model work is ongoing. We are still looking for volunteers to work on the applicability statement and threat analysis documents; those are the milestones already set when the group was chartered. There is other work, like policy-guided prefixes and other extensions, for which we have drafts that got split out of the base spec. We have not really started serious work on those yet; that work will get started once we settle the base protocol.
What happened, in very rough terms: the draft bumped from version three to four, and pretty much the whole spec is done. Lots of open-source code has been written. We interop the stuff on a regular basis; Bruno's framework basically allows plugging in any kind of implementation. There was talk of some third implementation but, you know, talk is cheap and code is hard. We had these meetings once or sometimes twice a week on Zoom, and it works like a charm: you just push record, and for an hour, hour and a half, you share stuff and toss stuff around. Extremely productive.
I
Normally,
three
four,
maybe
five
people,
some
providers
come
in
just
to
listen
and
like
three
people
are
cranking
the
stuff.
The
whole
aspect
is
on
get
so
you
know
all
the
people
can
go
and
modify
the
stuff.
Basically,
action
items
are
taken
at
the
end
of
the
meeting,
and
then
people
too
can
work
on
the
gate.
Election
verge,
it's
extremely
productive
model,
I
mean
if
you
can
go
and
tackle
a
problem
like
that
in
ITF,
I
think
all
right
so
see
all
green,
very
good.
So
last
time
about
the
third
was
still
yellow.
I
There's
this
little
outlier
little
tail
hanging
off,
which
is
orange
on
purpose,
because
there
are
some
excellent
multicast
ideas
being
taught
around.
We
went
through
a
couple
of
iterations.
It
looked
a
bit
like
a
pimp
idea,
but
we
realized
that
to
build
pimp
ideas,
kind
of
beside
the
point.
If
I'd
you
can
run
pimp
ideas
as
overlay,
but
the
ideas
that
are
being
tossed
around
now,
actually
quite
novel,
revolutionary
I.
Think
Pascal
has
a
press
all
and
we'll
walk
you
through
some
of
the
thinking
I.
I
Alright,
so
rough
statistic
just
to
give
you
an
idea
again,
you
know
how
much
work
is
being
done.
So
we
have
this
core
contributor
list.
Somehow
it
evolved
that
way,
I,
don't
know
why
we
don't
do
it
on
the
rift
list,
but
you
know
that's
the
chairs,
so
we
have
hundreds
of
threats
right
flying
around.
I
F
I
I
H
I
You know, I can copy/paste this stuff — I have some kind of a folder, and I could post this couple hundred things to the email list, and we can always do that; it's no big deal. So, like I said, we had about twice the volume of commits on the open-source branch; people started to contribute, and lots of stuff started to get pulled in.
You also see the size of the patch, which is a very rough indicator: when you run a diff from the last IETF to this one, what comes out is something like 25k lines of code; last time it was about 16k. I expect this to trail off significantly now — I mean, most of the spec has been done, also in open source.
But what you see is that the spec slowed down significantly: we have like 5k diff lines compared to like 7k diff lines last time, and last time we had seven model changes on the protocol — you know, the encoding of the models — and now we have three. It is really trailing off. Of course, there are still lots of ideas and lots of dynamics; it's extremely open, stuff is being tossed around, lots of stuff is being scrapped. Very good dynamics; it will be kind of a pity when it starts to wind down, but that's the way of the world.
So what changed now? What did we do to actually make everything green? There was a lot of discussion on the security envelope, and it led all the way to securing the models — and to talking to a bunch of people actually running fabrics: what do your security models look like, what would you desire? Not what can you have today — what would you desire? So we have kind of a security model for the fabrics, and you can basically go up and down that scale and trade off against it.
The security envelope went through a couple of iterations because we had to accommodate those security models, and my ambition was actually to address all the threats that the IGPs today cannot address for historical reasons; I think we did a decent job on that. We shortened a lot of types and included sequence-number arithmetic — that's kind of minor, but still something that needs to be done, and done carefully so as not to break things. Some link capabilities moved in.
So when they asked the question, we started to tear the problem apart. Alright, so the security envelope — it's nicely aligned now. I think this is quite quintessential and it brought down most of the stuff. (All right — I can't really get a pointer here.) So what we realized is that the convenience of zero-touch provisioning is actually a counterforce to security.
If you really want to run a zero-configuration fabric, you really don't get much security at all, because anything can plug in and just work with it. As you move up the scale, you can start to pin things down to make sure a node doesn't show up in funky places: you can say, OK, this is your level, don't show up in any other place. The integrity of the fabric goes up, whereas if you go down it becomes more and more convenient, toward zero configuration: there's literally no RIFT configuration, everything just comes up when you plug it in — or, you know, when it's miscabled, it doesn't come up.
So when we looked at it — there is actually already a section which is kind of a threat analysis in reverse: we basically describe all the attacks and how they are mitigated.
So maybe that can just be ripped out and used as the nucleus of the threat analysis draft. What we do above and beyond the traditional routing protocols is that we protect the lifetime — I talked about that stuff, which was kind of the last threat left open in the IGPs. I think things were published on IS-IS, but those are more like band-aids: making the attack harder, but not really preventing it. We have nonce exchanges.
We have nonce exchanges on adjacencies, which basically prevent any kind of replay attack. When you read the spec, you'll see we have been very pragmatic about how we go about the nonces, so we leave a window open; otherwise you basically have to sign every single packet, and the load is just excessive. We do not encrypt — so we'll see what the ADs come back with. We do not consider confidentiality a desirable property of a routing solution, and the cost is, you know, very high.
We provide origin integrity, which means that if people inject something into the network, they can only do it if they have the correct private key, and we do provide integrity of the adjacency. But that does not mean that we provide a chain of trust. Those are subtle differences; I'm sure the security area will have discussions with us about that.
So you could pass something which has origin integrity through an adjacency which is not protected, and then put it into a protected adjacency — which means you don't have a chain of trust; that we cannot guarantee. And we can talk about why that is not necessarily desirable: operationally, it's basically undeployable. So we do not provide confidentiality and we don't provide a chain of trust. Of course, we do the usual hash of the packet.
I
I
That
is
the
kernel
of
how
protocols
worked,
is
not
very
amenable
to
modeling
and
I
walked
it
quickly
through
it's
kind
of
interesting,
so
we
stuck
with
magic
just
because
we
have
some
bytes
left,
it's
kind
of
cool,
because
then
the
silicon
can
look
into
it
right,
because
right
now
we
run
on
UDP
any
kind
of
torque.
If
you,
you
know
want
to
build
silicon,
that
knows
this
is
rift,
give
it
priority
whatever
snoop
the
stuff.
There's
no
way
you
can
do
that.
The outer envelope carries the major version, which is very important, because that tells you whether you can even decode the model: when you bump up the major version, you may not be able to decode the model anymore. That is very important — otherwise you try to deserialize the thing, it just breaks down, and you have no idea what's happening; corrupt packet or something. Then we have an outer key ID, which is basically a local key on the interface, so that I can do the interface integrity.
Then you have a fingerprint length and the fingerprint itself, and from there everything below the fingerprint is fingerprinted. You have two things that you need to protect: the local nonce and the remote nonce. For people who are not security-aware: those are just random numbers that both sides bump up regularly, every couple of seconds, and that prevents replay attacks — because the fingerprint protects the whole thing.
So if you give me a nonce which is too far from my local nonce — or I saw your remote nonce, and now I see a packet where you send a remote nonce which is too far from what you gave us — I know that someone is trying to do a replay attack. And those nonces don't have to be particularly big, because it's always the combination of both sides.
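The "too far from my nonce" test above can be sketched as a window check on a fixed-width counter. The window size and counter width here are illustrative assumptions, not values from the spec:

```python
MAX_NONCE_DELTA = 5  # hypothetical acceptance window


def nonce_acceptable(expected: int, received: int, bits: int = 16) -> bool:
    # Circular distance on a fixed-width counter: a received nonce is
    # accepted only if it lies within a small window of the expected one;
    # anything farther away is treated as a replay attempt.
    mask = (1 << bits) - 1
    half = 1 << (bits - 1)
    delta = (received - expected) & mask
    if delta >= half:
        delta -= 1 << bits
    return abs(delta) <= MAX_NONCE_DELTA
```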
That has been done before in things like secure PNNI and so on, but it's not that well known, especially in routing protocols. So we put the whole thing in, and then we also have the remaining lifetime, which is protected — that's kind of the new stuff, but it's kept outside the model. The model sits all the way at the back, and there is a reason for that.
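Putting the fields in that order — version and key ID first, fingerprint, then the fingerprinted nonces and lifetime, then the model blob at the back — can be sketched as follows. The field widths are illustrative assumptions only; this is not the actual wire encoding:

```python
import struct


def pack_outer_envelope(major_version: int, outer_key_id: int,
                        fingerprint: bytes, local_nonce: int,
                        remote_nonce: int, remaining_lifetime: int,
                        model_blob: bytes) -> bytes:
    # Illustrative layout: version + outer key id + fingerprint length,
    # then the fingerprint, then the fingerprinted fields (nonces and
    # remaining lifetime), then the untouched serialized model at the back.
    header = struct.pack("!BBH", major_version, outer_key_id, len(fingerprint))
    protected = struct.pack("!HHI", local_nonce, remote_nonce,
                            remaining_lifetime)
    return header + fingerprint + protected + model_blob
```

Keeping the model at the very back is what lets a node rewrite the lifetime field without touching (or re-serializing) the model bytes.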
When you push a TIE over an adjacency, you can change its lifetime very quickly and very easily without mucking around with the serialized model object — actually, you should not muck with the object at all, and I'll tell you why in a second. Then what we have is an inner key ID, which is really origin validation. Why is it bigger? Because this one has to be agreed upon across the whole fabric.
I have to know that this guy sent it, but I also have to know which key it is, uniquely, on the whole fabric — actually, from the inner key ID I know who sent it; who originated it, sorry. So that's the originator ID, the nonces, and a fingerprint length, and that fingerprint covers the model object — which means that when I'm originating, I take this key that everybody knows under that ID, and I fingerprint the serialized object.
Thus the serialized object is now carried around as a binary blob. Why? Because we also did some work to understand how we extend this protocol — how do you extend these models, these schemas? And we found that if you run an object through a deserializer and then re-serialize it after changing something, you may end up with something that is semantically the same but a completely different binary object, because your serializer just encodes things differently — which means you would lose the origin validation. So if I receive that stuff, deserialize it, change the lifetime, and re-serialize, I have lost origin validation, because I cannot fingerprint it; I don't have the originator's key.
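The point about the opaque blob can be shown concretely. A sketch, assuming an HMAC-style fingerprint (the actual algorithm choice is the spec's, not shown here): the fingerprint is only stable if the exact serialized bytes are carried unmodified.

```python
import hashlib
import hmac


def origin_fingerprint(inner_key: bytes, serialized_tie: bytes) -> bytes:
    # Fingerprint the serialized bytes with the fabric-wide key that the
    # inner key ID points at; every node can verify it, only the holder
    # of the key can produce it.
    return hmac.new(inner_key, serialized_tie, hashlib.sha256).digest()


# Carrying the serialized object as an opaque blob keeps this fingerprint
# valid on every hop; deserializing and re-serializing could produce
# different bytes for the same semantic content and break verification.
```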
I
That
also
is
very,
very
good
for
so
it
gives
us
backwards.
Compatibility
people
can
stick
optional
elements
and
if
I
DC
realize
and
I
cannot
DC
realize
it-
you
I'm
still
running
that
through
basically
equivalent
to
ISI
is
unknown
TLD
model,
plus
it's
very
good
for
speed
right,
because
I
don't
have
to
reseal,
realize
every
time
sending
out
lifetime
changes
and
so
on
all
right,
Ling
capabilities.
I
Basically
what
we
realize
you
have
to
announcer
that
the
other
side
supports
PFD,
because
you
have
to
know
whether
it
be
of
these
suppose
I
should
to
come
up
or
not,
we
kind
of
missed
that
we
were
exchanging
discriminators.
Then
we
realize
like
what
does
it
mean?
Should
we
send
an
unknown
discriminators
or
we
just
put
the
link
capability,
saying
I
run
the
FTO
I?
Don't,
and
you
know,
with
the
assumption,
then
there
will
be
thing
added
in
the
future
to
it
all
right,
so,
flooding
rules,
yeah
open-source
implementers,
ask
a
lot
of
questions.
I
The
one
thing
that
was
not
very
well
specified
is
when
you
flirt
and
you
have
lifetimes,
you
don't
look
precisely
at
the
lifetimes
right,
because
there's
a
transmission
time,
there's
some
queuing
delay.
So
if
you
become
too
hung
up
on
the
life
times,
you
will
always
say:
oh
those
are
not
the
two
different.
The
ties
always
different
because
the
lifetime
is
mention
is
match.
So
there's
always
this
Fajr
factor
you
say
like
if
these
lifetimes
are
pretty
close
to
each
other.
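That fudge-factor comparison is trivial to sketch. The fudge value here is a made-up illustration, not the spec's number:

```python
LIFETIME_FUDGE = 300  # hypothetical fudge, in the lifetime's units


def same_tie_lifetime(a: int, b: int) -> bool:
    # Two copies of a TIE in flight differ by transmission and queuing
    # delay, so compare lifetimes with a fudge factor instead of exact
    # equality; otherwise no two copies ever look "the same".
    return abs(a - b) <= LIFETIME_FUDGE
```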
We actually fudged them for a lot of reasons, so we start from something in a range, and it had to start from a lower range than the last range and so on — you can read it up, it's explained, but it was kind of cute. And in a sense it's a property of RIFT, because of how we flood north; another protocol could flood south like that, but we don't flood south that way.
Of course, this stuff was found by chaos-monkey testing the interop — doing weird things, jumping up and down on the bed. We saw cases where there was useless information hanging around: it was disregarded by the SPF and so on, but you were still stuck with some stuff that made you ask, why is it here? The fix was also relatively simple: when you change your level in ZTP, you just flush out all the other TIEs. Works like a charm, and it's also not unusual.
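The flush-on-level-change fix can be sketched like this. The TIE database shape here is a deliberate simplification (a dict of TIE id to a record with a level), just to show the idea:

```python
def flush_ties_on_level_change(tie_db: dict, new_level: int) -> dict:
    # On a ZTP level change, drop TIEs generated at the old level so
    # stale information does not linger; only TIEs matching the new
    # level survive.  `tie_db` maps TIE id -> {"level": ...} records,
    # a simplification of a real TIE database.
    return {tid: tie for tid, tie in tie_db.items()
            if tie["level"] == new_level}
```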
So that's kind of the flooding. I think it's all right — no, that will definitely not work. Can you give me the PowerPoint? Because I need those pictures.
Well, I don't have the next slide... Okay, why did I write this stuff down? I still had something to say. I talked about extensibility: we actually wrote code, with Bruno, across a couple of languages and a couple of serializers, to make sure that we can actually, in a practical sense, move the protocol forward with optional schema elements and everything will work — it can also be flooded through nodes that don't understand it, and the decoders will not hang up.
I
You
know
this
kind
of
stuff,
but
I
think
I
talk
to
other
stuff,
all
right
all
right,
so
that
will
be
interesting
discussions
once
they
managed
to
pull
up
the
PowerPoint
and
there's
a
slower
stuff.
So
packet
numbering
I
talked
about
that
stuff.
So
we
actually
have
this
and
we
also
have
throttling
from
the
other
side.
So I can, for example, see that if the guy throttles towards me and I see a lot of misordering and losses, I can squelch the guy on the LIE, saying: you're sending too fast, throttle down. That allows the flooding rates to be adaptive and basically always run at the maximum flooding rate — the flooding rates in RIFT are, you know, orders of magnitude better than any other protocol right now. Type tightening.
So we made the types much smaller — like sequence numbers and packet numbers; nonces were about the last place where 64 bits were used — and we also had to fit them into the security envelope. That's also where we did the work on the sequence numbers: I put the sequence-number arithmetic in the draft and so on. And we also advertise an unsolicited, optional downstream label on the LIEs, so basically you get, in a sense, LDP for free — because people asked for it; I know how they use it. And that's it there.
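Comparing small, wrapping sequence numbers needs the arithmetic mentioned above. A minimal sketch in the spirit of serial-number arithmetic (RFC 1982 style); the width is a parameter, not the spec's choice:

```python
def seq_newer(a: int, b: int, bits: int = 32) -> bool:
    # Serial-number arithmetic: `a` is newer than `b` if it lies in the
    # half-space "ahead" of `b` on the circular counter, so comparisons
    # keep working when the counter wraps around.
    half = 1 << (bits - 1)
    return a != b and ((a - b) & ((1 << bits) - 1)) < half
```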
No — all right, so I'll try to give you a story which, of course, won't work; it's too complex. I had one picture to explain again how flood reduction in RIFT works, which is kind of Pascal's idea — an interesting bastardization of the way MANET was working. Every node from below basically picks enough nodes above it to make sure that you get double coverage of the nodes even one layer higher, and that's all recursive. It also load-balances, because each of the southbound nodes can pick a different set of what we call flood repeaters. I think you can follow that: I just take all these nodes above me, and I look for enough of them, randomly, to make sure that everybody above them gets two copies — but not more, because that's where all the flooding overhead starts. So that both reduces and balances, and for an actual five-level Clos it's arguably also close to optimal.
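That election can be sketched greedily. This is only an illustration of the double-coverage idea; the draft's example algorithm is more elaborate, and — as noted above — every node may legitimately run its own variant:

```python
def elect_flood_repeaters(coverage: dict, redundancy: int = 2) -> list:
    # `coverage` maps each northbound parent to the set of grandparents
    # reachable through it.  Greedily pick parents until every grandparent
    # is covered `redundancy` times (where topology allows), so each piece
    # of information reaches everyone twice, but not much more.
    need = {}
    for grandparents in coverage.values():
        for g in grandparents:
            need[g] = redundancy
    repeaters = []
    for parent in sorted(coverage, key=lambda p: -len(coverage[p])):
        if any(need[g] > 0 for g in coverage[parent]):
            repeaters.append(parent)
            for g in coverage[parent]:
                need[g] -= 1
    return repeaters
```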
When you go higher, you do a little bit too much, but it's still, I mean, ridiculous — we're talking extremely good numbers. But now an interesting problem starts when you reboot a node, because it starts to advertise that it has nothing, so everybody tries to give it everything — because, don't forget, everybody still holds everything.
It's just that they don't repeat it to a level; but if the level above reboots, then everybody will try to give them the stuff. And you could say: oh yeah, okay, if you know the flood leader for this type of information, don't propagate it — but that's kind of fragile when stuff reboots. So the reduction works like this: the first time I see a description that would force me to send the information, and I'm not a flood leader, I suppress it; the second time, I send it.
I
So
it
has
a
lot
of
positive
stabilization
properties
right,
which
means
that,
on
the
coming
up,
you
get
an
Incas
reduction.
You
have
all
these
people
spread
and
the
information
is
only
repeated
twice
each
type
of
information
from
different
nodes.
But
if
you're
in
any
kind
of
transients-
and
you
re
request
the
stuff,
the
whole
thing
will
stabilize
because
then
people
have
no
choice
but
send
you
a
copy
which
means
like
this.
The
flood
leaders
somehow
didn't
do
their
job.
The
whole
protocol
would
possibly
positively
self
stabilized.
Then
it's
pretty
much
the
trick.
I
I,
don't
even
dare
to
ask
who
followed
that
okay!
Well,
you
know
there
I'm
sure
the
picture
problem
will
get
solved
and
you
can
look
it
up
in
material
on
my
two
beautiful
cryptic
sketches
and
then
the
stuff
will
make
sense
all
right.
Otherwise,
you
see
me
after
the
session
and
we
chatted
on
paper.
The
PowerPoint.
I
A
I
F
J
J
J
But we don't know if — the detect multiplier usually has a default value of 3, so if we lose every other BFD packet, we're very close to a bad situation on the edge, and we don't know about it. So, to extend BFD beyond what it can do now, we want to ensure backward compatibility and extensibility for whatever features are possible — even ones the authors of this proposal haven't thought about.
So you have here the format, in the usual packet-diagram view. We have the same original BFD control message as defined by RFC 5880. We have a guard word, which is kind of a trade-off — maybe extra caution — to detect whether it's really extended BFD or something weird. And then it's followed by TLVs, and the TLVs can contain sub-TLVs; to me, TLVs will probably, for one reason or another, need to be nested inside.
Okay, Tony has an opinion — so that's again under discussion. There are some implementations that do look at the IP total length and expect it to reflect only the BFD control message length plus header, but some implementations don't. For implementations that don't make this check, this will work normally — or you basically need an implementation that supports it. So if there's a classical RFC 5880 BFD implementation on the other side, it will not send Final back, because this negotiation uses the Poll sequence.
So, returning to capability negotiation: this capability is a little bit ambiguous, because I didn't want to go into how to measure loss and delay and everything — we'll get to that; next slide. So let's assume we have extended-BFD implementations that have agreed on the capability, and we want to do a performance measurement. The performance measurement can be realized in two ways: one, it could be that extended packets will include — okay, that's an old slide, and it's a typo: it's not 6470, it's 6374.
So, my apologies — it's RFC 6374, MPLS packet loss and delay measurement. The idea is: we take these messages, which are already defined and implemented by many, and we insert them in the TLVs in a BFD packet. That allows us to do direct or inferred synthetic loss measurement, and it allows us to do delay measurement and use the timestamp formats — whether it's PTP or NTP — by sender and reflector independently, because there is an explicit indication of the timestamp format by the side that fills it in.
It's echo request/reply, or you can use it one-way to the far end if you wish. And if the MTU changes for some reason — say over a multihop BFD session, where you have convergence underneath — then, if you're using this in conjunction with the asynchronous mode, your session will fail because the MTU changed and your packet is too big: the far end will not receive it three messages in a row, and you have failure detection. Again, it can be used either way: as monitoring, or as detection of the failure.
Next steps will be continuing to add details, because for direct mode this extended BFD can be used to fetch results from the far end: in one-way measurements, the measurement results and calculated performance metrics are still sitting at the far end, and there might be interest in fetching them. At the same time, it will definitely be interesting to do the YANG model, because then these performance metrics can just be exported through YANG. We'll discuss, discuss, discuss; comments, suggestions, and cooperation welcome.
Yeah, and I mean, it's very interesting for these IP fabric folks, because the jitter, the delay — this stuff is very relevant for them. You know, there is discussion of that stuff for the periodic phase: before you bring up the links, and then periodically, you know, monitoring the link quality. So it's very important for them. Yeah, okay.
All right — the talk is good — so, we had a hackathon on RIFT. Here is one of the participants, and Bruno assured me that when he took the photograph of the gentleman, the gentleman signed away all his rights, so he has the waiver. I don't know — it was taken somewhere in South America; the brewery has some interesting places. I have the fun job of taking his material and presenting it; I don't even know what is on the next thingy. We thought maybe to record a presentation or something.
We
thought
maybe
a
record
a
presentation
or
something.
I
But
you
know,
experience
has
been
poor
with
remote
presentation,
so
we
just
go
with
the
standard
deck.
Okay,
we
did
cows
monkey
testing
on
the
protocol.
First
iterations
I
think
that
will
be
the
future
of
this
august
body.
If
we
want
to
stay
relevant,
we
have
to
learn
how
to
do
modeling
and
we
have
to
learn
how
to
count
monkey
test
modeling
just
to
ensure
the
quality
of
what
we
deliver
it
on
extremely
short
time
scales.
I
I
We took the open-source implementation, which largely checked out. We generated some configuration for what Bruno considers a large — and I consider, you know, a rather smallish — data center topology, and then we rented a bunch of AWS instances and just started to bring the stuff up.
I explained — actually, maybe the other presentation explains — how this stuff is multi-instantiated. We basically started with containers there, but then Bruno went to namespaces, which are very, very easy to bring up on the Linux kernel. Topologies got scripted together, and then we brought the stuff up — jumping ahead a little bit — using random scripts, and then we basically started to look at whether the protocol converged, which, for a protocol of that complexity, is actually not a trivial thing to assure. So that's how we ended up at the hackathon.
The way we generate those topologies is that there's a meta-configuration describing a fabric. It doesn't allow you more than, like, a five-level Clos, or PoDs with uneven height — that will evolve. That stuff generates a configuration for every router, scripts to start and stop (because you have to be able to log into the different namespaces), and then it generates the chaos-monkey scripts, which shake stuff up and down, and it also generates the check script, which goes out to verify.
All these things gather all this information, reconcile it, and figure out: yeah, the topology actually works. It does pinging back and forth and these kinds of things. So, like I said, we generate this chaos script that generates perturbations — brings things up and down, jumps up and down; we'll talk about all that in the next slides. Right now it's link failures and node failures; for what we observe, unidirectional link failures would be interesting.
Packet drops, reordering, delays and, of course, corrupting packets. About that: just corrupting packets by randomly flipping bytes doesn't buy you much, because the protocol will not converge — it detects that. So you have to understand what the variables mean, and the boundary conditions — like when you do Google-style chaos-monkey testing on software. That's the methodology: at the end you repair all the breakages, and you see whether the protocol is in the expected state. So that's one of the sequences.
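That methodology — random breakage, then repair everything, then check the end state — can be sketched as a tiny harness. The action/repair/check callables here are placeholders, not the hackathon's actual scripts:

```python
import random


def chaos_run(actions, repair, check, steps: int = 10, seed: int = 0):
    # Apply a random sequence of perturbations (link kills, restores,
    # corruptions...), then repair all breakages at the end, and finally
    # verify the protocol settled into the expected converged state.
    rng = random.Random(seed)
    for _ in range(steps):
        rng.choice(actions)()
    repair()
    return check()
```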
You start from a state: break a link, kill something, break another link, bring something back, restore — bah, bah, bah — red, green, red, green. At the end you have a sequence of greens that fixes everything, and you go: okay, did this thing survive? Are we in a fully converged state? Some example runs: kills, wrong configurations, and so on.
It's all exceedingly well documented in our stuff: breaks, breaks, breaks, whatever, fixes at the end. The check script is actually topology-aware, and it knows where to look — I'll show what it looks at: it pings from every leaf to every leaf. With interesting discussions: people start, of course, to play with it, and they start to ask questions; you recognize that RIFT doesn't do certain things, or expects certain deployment modes. There's nothing like running code in an environment like that, where you start to understand what the hell is going on.
So we check from all the leaves whether all the nodes' adjacencies stayed up, and whether the northbound default routes are in the RIB and the FIB and the kernel. Because there is a local RIB, which then gets broken down into the FIB based on the route preferences, and then you push it into the Linux kernel — so you have to verify the whole chain. Same thing for the southbound; there are always more things for the future. So that's where the convergence checks are being done.
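Verifying that whole RIB-to-FIB-to-kernel chain can be sketched as below. The data shapes and the "lower preference wins" rule are assumptions for illustration, not the implementation's actual structures:

```python
def best_route(rib_entries):
    # RIB entries as (preference, next_hops); assume lower preference wins.
    return min(rib_entries, key=lambda e: e[0]) if rib_entries else None


def chain_consistent(rib, fib, kernel, prefix="0.0.0.0/0"):
    # Verify the whole chain: the prefix is in the RIB, the preferred RIB
    # entry won into the FIB, and the FIB entry was pushed to the kernel.
    best = best_route(rib.get(prefix, []))
    return (best is not None
            and fib.get(prefix) == best[1]
            and kernel.get(prefix) == best[1])
```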
What helps enormously with Bruno's code is that you can generate these SVGs of protocol runs: you can basically slice out any kind of protocol interaction, put it on an SVG, and you see all the packets, timed, as they fly through the network, and you can flip things on and off, reorder, and whatnot. If you truly want to understand how the whole thing works, this is a fantastic educational tool. Alright — so we generated those random topologies on local machines, and we did the chaos-monkey testing.
I
We
wrote
code
to
know
more
and
more
consistency
checking
because
we
were
looking
manually
for
things
what
we
learned.
So
a
the
open
source
implementation
had
a
couple
of
issues
with
this
thing
shook
out,
for
example,
ipv6
flooding
issue
where
it
is
absolutely
valid
for
if
to
come
up
and
one
side
flats
on
the
ipv4
lots
an
ipv6
if
you're
running
both
address
family
and
they
had
timing
issues
depending
how
he
was
coming
up.
They
ended
up
on
the
wrong,
socket
or
missing
stuff.
Of course, there were always the shutdown scenarios and exceptions — when you break things, stuff starts to shake out. We added multiple show commands, because you start to inspect state when things don't go well — especially when they go wrong in very weird ways. In the protocol spec itself we found nothing, so in that sense it looks very solid. And that's the hackathon. Yeah — I don't think we want it, but I wasn't even there.
So let me read all that down. The github has been going since basically the IETF 102 hackathon — so within basically six months we got to the state where, you see how much is implemented, it's chugging along to become a complete RIFT implementation. It started originally with just a small subset, maybe just one side of it.
I
It
was
very
successful
in
improving
the
specification,
no
discussion
there
right
and
I
could
I
give
him
that
I
would
consider
that
a
reference
rest
implementation,
absolutely
I
mean
the
code
is
extremely
good
quality.
It
is
completely
geared
toward
someone
to
understand
and
learn
the
protocol,
not
for
high
performance,
but
for
that,
as
a
reference,
implementation,
beautiful
and
again,
the
emphasis
is
emphasis
in
being
user-friendly,
educational,
very
transparent,
very
debuggable.
I
You
can
look
at
the
stuff
very
easily
and
not
all
performance
are
understandable,
extensive
documentation
and
two
completely
unencumbered,
but
any
kind
of
you
know
roadblocks
like
IPR
or
any
kind
of
vendor.
You
know
nefarious
influences.
It
always
is
good,
so
how
to
get
started.
Just
one
thing
to
get
you
bootstrapped,
when
you
get
into
the
github
torus
installation
start
up
everything
just
nicely
rattled
up
good.
So
what
was
added
so
Bruno
added
quite
a
significant
amount
of
stuff
since
last
time,
ipv6
adjacency.
Bizarrely, IPv6 multicast works — actually, until it stops working — and it's needed for the LIEs, especially if you have link-locals and they overlap and whatnot. So you send both IPv4 and IPv6 back. It took a while to understand how the spec is written: the order does not matter, how the address families come up does not matter, and the implication is that if you support v6 you can also forward v4.
I don't know precisely what he means, but the LIE FSM — item 210 — which means, like, if you receive the same packet, you don't do a state transition; that was somehow important. All right, so here he actually missed a point: the v6 LIEs imply that the two nodes view the interlink as v6-forwarding capable, and v4 too, but you can also forward v4-only or v6-only. And we didn't go through all the complications of trying to tear down address families once you're up, even if you stop sending hellos.
Also, if you want to remove an address family from a link, you just have to bounce it. If people are highly concerned about that — which I cannot imagine in these fabrics — then, you know, we can go and extend the spec, but these kinds of things are exceedingly complicated. And then what took him a while, all of a sudden, was to understand the flooding — well, you know, he just bounced into the stuff, started to read the spec, and asked the question: you can flood on either v4 or v6.
I
So you have to open sockets for both and listen on them, and when you open which socket is, you know, detailed implementation stuff, and you may end up sending v4 flooding one way and v6 flooding the other way, which is perfectly valid. That was what the chaos monkey caught him on, because the stuff was coming up in random sequences, breaking and coming up again, and we ended up in this asymmetric configuration which, per the spec, is perfectly valid. So yeah, here it is, IPv6 and all.
I
All these v6 socket options, where things are dependent on the OS or even on the distribution, the magic options to make the stuff work; the packet format itself is much easier, per the spec. So he has everything right now, and with these kinds of cute things you start to understand how it all works, right. So this is "show interface sockets": you understand how many sockets you actually have open on an interface to support all this.
I
I
On the protocol side he implemented flooding reduction; the example algorithm in the draft is complex, kind of really cute, doing all kinds of stuff. The beauty of flooding reduction in RIFT is that everybody can run a different algorithm. It's a completely, you know, asynchronous, distributed algorithm; that is what makes it so blazingly fast and stable.
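(A rough sketch of the flood-repeater idea in Python; this is an illustration, not the draft's example algorithm, and all the names and the greedy strategy are assumptions: each node independently elects a subset of its north neighbors as repeaters so every grandparent stays covered with some redundancy.)

```python
# Hypothetical sketch, not the draft's algorithm: greedily elect flood
# repeaters among the north neighbors until every grandparent is covered
# `redundancy` times (or no neighbor can improve coverage).
def elect_flood_repeaters(north_neighbors, redundancy=2):
    """north_neighbors: dict neighbor_id -> set of grandparent ids it reaches."""
    uncovered = {}  # grandparent -> remaining coverage still needed
    for grandparents in north_neighbors.values():
        for gp in grandparents:
            uncovered[gp] = redundancy
    repeaters = []
    while any(v > 0 for v in uncovered.values()):
        # Pick the not-yet-elected neighbor covering the most needy grandparents.
        best = max(
            (n for n in north_neighbors if n not in repeaters),
            key=lambda n: sum(1 for gp in north_neighbors[n] if uncovered[gp] > 0),
            default=None,
        )
        if best is None:  # ran out of neighbors; coverage is as good as it gets
            break
        repeaters.append(best)
        for gp in north_neighbors[best]:
            if uncovered[gp] > 0:
                uncovered[gp] -= 1
    return repeaters
```

Since every node runs this locally and asynchronously, different nodes may use entirely different election strategies, as the speaker notes.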
I
So we talked about the implications. He really implemented the stuff that Pascal put in, and a lot of clarification questions were actually fixed; there were a couple of places where it simply wasn't clear what the notation means. And here is, for example, "show flooding reduction", and it's way too small to read. It shows you the flood leaders and, I don't know what it's all showing; I think the flood leaders' last election and, yeah, I forgot, it is too small to read.
I
He implemented the whole SPF, so all those things you only hit once you start to write code; he was asking questions there as well. So that shows you the SPF: which destination fell out through which system IDs and next hops, the south and north split, and some stats about how expensive the computations are, because SPF is run in both directions. And it's not really an SPF: you can actually cover all the paths on the fabric, because it is loop-free. So he did that.
I
Then he implemented the whole RIB, right, because he didn't have a local RIB: when you have the southbound and northbound routes, and external routes coming in, and local prefixes all over the place, then you have to tie-break by the preferences which the spec defines. So the local RIB was implemented, including ECMP, and then you have the show commands, which show you all the next hops and where the stuff comes from: routes from the north and the south, v4 and v6, ECMP over, you know, multiple interfaces and link-locals.
I
So that's all working, and then, of course, the RIB holds all the routes for the same prefix, whether they come from north, south, or external; it holds them and then tie-breaks them into the local FIB, which is just the best, just the best routes. So you can go ahead and look at "show forwarding", which is all the routes and stuff ready to be pushed into the kernel, and then, of course, the kernel routes themselves.
I
I
I
Bandwidth balancing is not implemented yet. Extensive statistics work was done, you know, because when you start to implement SPF, load balancing, flood reduction, then all of a sudden just understanding what's going wrong without stats is hard; there's so much stuff flying around. So the stats tell you, you know, that on this interface the other side retransmitted your packet all the time, or when a packet arrived out of order, or your packet got corrupted. So you see the four different packet types.
I
K
Hi, Alistair Woodman, NetDEF. Yes, we're looking for folks who are interested in collaborating with the FRR community on a C implementation of this. So anybody who wants to talk about this can contact me directly after this meeting, I'll be hanging around, or you can come along at 2 o'clock this afternoon to the FRR meetup.
I
I
Bruno is interested in implementing the stuff in C, or there's, you know, a desire to build the community around rift-python. He pretty much explored all the rat holes, so it would be kind of a straight port for the high-performance stuff, and of course that would then allow adding the YANG models and, you know, the like, and get people engaged. All right, so the current status summary: the adjacencies are pretty much done; I don't know what's missing, I'd have to look quickly. ZTP has been done for a while, and that has proven extremely solid.
I
The debuggers found no surprises, except ZTP accepting stale information, and there were no large-fabric, heavy-load flooding tests; I don't know what's missing there. Route calculation is half done, probably because of the bandwidth balancing. The management interface, the chain, I don't even know what it is, but it's basically there. This is what has been done between the last meeting and this one.
So, adjacencies are clear, all right; the v6 adjacencies I talked about. What is not complete is the security envelope, so we still have to implement the security envelope, and so does Kate.
I
I
Zero touch is done. Flooding, all right: the v6 flooding has been added and the flood reduction is done. What is not complete: the efficient TIE propagation without the encoding optimization, and positive disaggregation. So that's the status as of two or three days ago. Bruno has shown me yesterday night, and told me to say, that he is, like, fully loaded down with the positive disaggregation. The beauty of well-specified protocols: once you crank code, you can go really fast, so he already has all the positive disaggregation implied.
I
It isn't here, of course. An interesting question: the spec has the language of an oracle, so when you implement it, you understand why it had to be written that way, but there are no surprises. Negative disaggregation, which of course is a little bit harder, he chose not to do yet. I didn't say that on the spec side we also have things to update: Pascal wrote quite an extensive example section on the negative disaggregation. I mean, it all seems to compute, but it still needs to be implemented.
I
That is, of course, only of interest for multiplane fabrics, so, you know, either very low-radix fabrics or very large fabrics. The key-value store; the external TIEs, which are of course very helpful if you redistribute into the protocol; policy-guided prefixes are, per the charter, optional; the overload bit, a little bit, which is indeed important; and the clock comparisons for mobility; and the route calculation.
I
What's not done yet: east-west forwarding on east-west links; the positive disaggregation he has largely done; the fabric bandwidth balancing, which is of course interesting; label binding is too early; and the multicast is something this guy will talk about, the optional extension outside of the base spec, most likely. On the management side, no interest in implementing an SSH CLI client unless somebody really deploys this and starts to run production. Command completion, YANG data models, and granular debugging and tracing; I think here he's just underselling himself.
I
This stuff traces quite well and is very easy to debug, especially, like I said, at the hackathon, for me and someone else who wrote, you know, a good amount of code like show commands and statistics. Lots of stats have been added, and there is also command history in the CLI now, which I didn't even know was there, and a development chain. So, like any other person, code coverage is at eighty-something, which bugs him, and yeah, there is demand in this direction, quite interesting, from people already.
I
So that is interesting in the sense that you used to have to put the model into Wireshark and compile the model; now they include the model, but besides that, it looks like a dissector that just plugs into Wireshark. And it's also one of the reasons for moving the magic to the front of the envelope: it's kind of simpler to dissect the stuff, you don't have to say "ah, that's the port it's running on". Yeah, I think I mentioned everything. Questions, observations?
I
B
F
I
B
I
It was very successful in this respect, and we're talking six months, pretty much, from, I expect, publishing to shaking out the spec with a wild implementation; we're progressing, basically being interopped all the time. So the monkey testing will be the interesting thing, I would say: you know, to understand how you can take your protocol spec, which is modeled, and then somehow, from the model, understand what you were supposed to shake. Actually, maybe we need to annotate the models, and maybe it needs something like a meta-model.
E
K
F
F
F
I was thinking that we probably should still use separate signaling for multicast after all, so that we can have better load balancing for elephant flows and mice flows, because if you just use the same few trees to send traffic anywhere and everywhere, then that works well for low-volume, low-rate flows, but if you have what we call elephant flows, then it's not good.
F
In PIM, if you want to receive a group's multicast traffic, you send (*,G) joins towards a rendezvous point address. That address, called the RPA, is either on a particular router, or it's just an address on a LAN that is not bound to any router; it could be just a virtual address out there, on the link that hosts that RPA.
F
F
So when you need to send traffic, the first-hop router will send the traffic upstream towards the RPA, following that sub-tree, and eventually it will arrive at the RPA routers. Along the way, the traffic also forks to the downstream routers from which (*,G) joins have been received, and then the traffic that arrived at the RPA routers is just flooded between them.
F
If the RPA is on a loopback interface, then you are the only RPA router there. So in the LAN case, the RPA routers just flood to each other; that's fine, it's a LAN. And then each RPA router, when it receives the traffic on the LAN that hosts the RPA, will just send the traffic back down to the receivers on the downstream side.
F
F
F
L
F
Right, so how do we solve that? Pascal will talk about that. And then, instead of (*,G) trees, what we actually establish, via the signaling, are (*,prefix) trees. The prefix can be either a host /32, or a (*,*) at the extreme. In the (*,*) case, it's a tree that can be used for any group.
F
If you don't care that the traffic goes where there is actually no receiver, those can be used for mice flows, and then you can have (*,G-host) trees, which you can use for elephant flows; then the traffic is sent only where there are actual receivers. And in the middle you have the (*,prefix) trees. They can be useful for your in-between flows, basically the flows where you can allow traffic to go somewhere where there is no receiver, as long as it's not going everywhere.
F
F
You just use a single (*,*) tree at first, sending traffic everywhere, and then you can establish a few more (*,prefix) trees for your in-between flows, and eventually, if you realize that, oh, I have one group, or a hundred groups, or a thousand groups with high-rate flows, then you establish those trees accordingly, only when you need them.
F
So the joins, in the PIM model here, will be done by the northbound PGPs, the policy-guided prefixes. The policy-guided prefix TIEs are consumed and merged, and we re-originate at every hop. Normally a PGP is sent to all your neighbors, north neighbors or south neighbors, but here, when it's used to establish the tree, we send it to just one of the north neighbors, and that one is chosen by a hash done by the downstream node.
F
The hashing should have the characteristic that different nodes choose the same upstream nodes, but you do not cause one upstream node to have to replicate to many, many downstream nodes. So let's say at one layer you have, say, a node that, what's the word, okay: because in the fabric you have many, many ECMP paths, it could be that some...
F
F
...128 nodes at one layer could be connected to the same hundred-odd nodes in the next layer up. So you want to avoid the situation where one upstream north neighbor is centrally replicating 128 times to the downstream neighbors. So, ideally, it would be like this: some of the neighbors hash to one upstream, some other downstream neighbors to yet another upstream, and eventually they converge at the top. That way the replication is more efficient and yet not too demanding for any node.
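(A small Python sketch of that hash-based parent choice; the hash function and names are assumptions for illustration, not anything the draft specifies: the choice is deterministic per tree, so siblings with the same candidate set agree on the same upstream, while different trees land on different parents and spread the replication load.)

```python
# Hypothetical sketch: deterministically pick one north neighbor per tree.
# Nodes sharing the same candidate set and tree id converge on the same
# parent; different tree ids tend to pick different parents.
import hashlib

def choose_north_parent(tree_id, north_neighbors):
    """Pick one parent from the candidate north neighbors for this tree."""
    def score(neighbor):
        digest = hashlib.sha256(f"{tree_id}:{neighbor}".encode()).hexdigest()
        return int(digest, 16)
    # Highest hash wins; sorting first makes the result order-independent.
    return max(sorted(north_neighbors), key=score)
```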
F
So with those joins we establish the sub-trees, and we also establish the corresponding forwarding state. That forwarding state includes the interface list: the northbound interface that is produced by the hashing, and also the southbound interfaces on which a join has been received. And then traffic arriving on any of the interfaces in the list is forwarded out of all the other interfaces in the list; traffic received on any other interface will be dropped. That's how the forwarding looks. So, coming back to the RPL problem.
F
L
Okay, first slide, yes, here we are. I just asked the chairs how much time they gave me for this, and they said "plenty"; plenty is good, so I'm going to use plenty. First thing is, I will paraphrase you somehow, because it doesn't hurt saying things twice, and I assume now we have illustrations to go with what you just said, Jeffrey. So, we started with something we thought was very simple.
L
As Jeffrey said in his first slide, what we wanted to do is basically reuse the topology that we built for the optimized flooding, and, as you know, this topology that we built for the optimized flooding has the property that there are pretty much redundant paths between any leaf and the top. So, forgetting whatever is in the middle, between any particular leaf and any particular top-of-fabric node...
L
There is a redundant way to get there. So we thought: what if we just send the multicast packets north, wherever they are, to any top-of-fabric node, and that particular top-of-fabric node would reflect the packet over the redundant paths that reach every leaf. That was the initial thought. Without changing anything else, as Jeffrey said, we thought we could achieve an efficient multicast just using the fabric and the state that already exists in the fabric.
L
If you wanted to filter by (*,G), you would just flood the (*,G) information north, only through the flood repeaters, which would make it so that the traffic to that particular group would flow through the repeaters only if there is a leaf which is actually listening for it. I mean, with this simple assumption we were already there: we had a redundant way to distribute the multicast traffic to groups, or to everybody, if we wanted it, as a broadcast.
L
There were two issues with that. The first issue was that we had to make a different routing decision for a packet going north and a packet going south, and that is what probably killed it; well, not killed, because everything is still on the table, we have not written the text yet, but the reason why we are opting out of using the optimized flooding topology is to avoid having to make this different decision...
L
Whether the packet is coming from the south or the packet is going from the north, right. This is pretty much the sparse-mode-like model that we had: if you get a packet from the south, then you propagate it north to any of the parents, to reach any of the top-of-fabric nodes, which will reflect it. So that was one behavior. Now, if you get the packet from the north, then you would have to pass it down to everybody that elected you as flood repeater, so that's a different behavior.
L
L
If you have a particular leaf which is not reachable by a particular top-of-fabric node, a fallen leaf by definition, then, if the multicast packet, as it flows randomly north, reaches that particular top of fabric, and there is a listener on that particular fallen leaf, that listener would not get the multicast packet, right. It's the problem of the fallen leaf. So, how to handle that? That was also something which made the proposal, which looked initially very simple and efficient...
L
Having to handle this kind of corner case, whether we disaggregate or whatever else, kind of looks painful. And then, on the side, there is this missed optimization versus bidir: as you know, if you have bidir then, when you propagate north, at each level you can already start sending the packet south, which means that you get more efficient use of your fabric. So we kind of moved from that model on to another model whereby, as Jeffrey said again, we build sub-trees, and the sub-trees are very dynamic.
L
L
There can be many reasons why we rebuild a tree, but a node can pick any of its parents and change that over time; it doesn't mean you have to reshuffle everything, because in the network it's an individual decision by every node. A node can basically be on one tree at some point in time, but five minutes later, if it decides to, it can join through another parent, and you don't have to tell anybody above or do anything so fancy, at least as long as you don't advertise subgroups but just build the trees.
L
It's a local, inexpensive decision. So what we do is sub-trees which are rooted at the sub-ToF, what we define as the level right below the top. So we have defined that there is a sub-ToF, and we build those trees, and we might decide to build different trees that we would access for different groups, etc., but I'm just describing what we do for just one, the first big multicast tree.
L
So, let's have every node select one parent, and that ends up building a collection of non-congruent trees which are rooted at the sub-ToF, just by the individual decision of every node of finding one, and strictly one, parent. So the big problem we have with this now is to interconnect those trees into a bigger tree, and now we are back to the PIM bidir question of all this big thing here: this level that I have illustrated with this kind of circle becomes, virtually, the RPL.
L
That's the link on which all the small trees we discussed are rooted. But the interesting problem we have with RIFT here is that this RPL is non-broadcast multi-access. It's not a token ring, it's not an Ethernet, it's not the air; you can't just push a packet onto it. Say a packet coming from the south reaches S1: if it were an Ethernet, you would just broadcast it, and then every other node attached to it would get the packet over its RPF interface and be able to flood it down itself. But that's not the case.
L
It's non-broadcast, so we have to handle this network and, again, make it so that if a packet comes from anyone, is injected in this area, it can go out to two and three and four and five. So what kind of structure do we put in place to enable this NBMA RPL? And that's again a point that Jeffrey made, and I will paraphrase: how can we make it so that the tree is well balanced, not too fat?
L
On the other hand, if we make a very lean tree, like, we could say, oh, connect these guys in a Z, then a particular packet being injected here would be echoed all the way before it can reach this guy and go down its tree. So there is this problem of building a spanning structure that would enable the RPL operation in a way that's balanced enough, fat enough, but not too fat, kind of. So that's what we've been looking at, so I have represented...
L
I have two slides to explain this. The final solution, we are not there yet, but this is kind of the thinking where we are, and that's where, you know, inputs are so, so welcome. So the thinking we have is: let's build this non-broadcast multi-access RPL, and here is how we could do it. Because we want, like I said, to balance the level of fatness of the tree that we are building between the sub-ToF and the top, the idea is basically to fold the tree.
L
We're electing a subset of those guys as the level of the tree that will be doing this RPL thing, and, for instance, we could again be using a hash, and the hash would elect, like, the five or the ten best of those nodes to be the reflectors in that NBMA RPL. So in my slide the hash has pretty much determined that those two guys, S3 and S4, are the best candidates, as the best, you know, the best hash for whatever we are doing.
L
L
Now, okay, so we still have non-congruent trees, but instead of being rooted at the sub-ToF, they are rooted at the top, and we have kind of converted them in a way that it's not distributed over all the ToF; it was kind of distributed over the whole sub-ToF before, but now we have constrained the number of ToF nodes which participate in this game. And now what we have to do is basically complete the tree so that S3 can talk to S4.
L
So that's kind of the second step that we illustrate here, and that's the piece that requires additional signaling. We have not found a way to do that without a minimal amount of additional signaling, and the additional signaling that we need is to make sure that all those sub-trees converge onto the same tree, as opposed to making islands: for instance, if there were S3, S4, S5, S6, S7, S9, which could all be roots of a sub-tree.
L
S3, through this information, can realize that there is a root with a higher system ID than itself. So what it would try to do is establish a link, this link here, between itself and a child of that higher system ID. Right, it might be that, because of the disjointness and partitioning stuff, S3 doesn't see a child of S4, but it might see a child of another node with a higher system ID than itself, and that node would see the main root.
L
So, in order to be able to make this single tree, you need to expose not only yourself but also the tree you belong to. You need to say: as S3 joins S4, S3 now needs to expose that it has a link to the tree which is super-rooted at S4. So the thing which is not in the current signaling is how to expose that, actually, the main tree I belong to is S4's, and now we could talk.
L
So that's why, in our thinking, we would have to expose S4 as part of the signaling, and to relay it: that's the name of the tree that I'm building beyond this area. And, well, is it agreeable to do that? And we can, at the next step, start asking: now, what if there is a chain, a challenge, etc.? And that's where we probably would need to introduce a distance.
L
So basically, it's all about building a tree of those roots which were selected by the sub-ToF. So now we have joined them together, and the result of what we build with this is a spanning structure that spans all the leaves, pretty much all the sub-ToF, all those which were selected as a parent, and some of the ToF nodes, a limited set. And we can decide the size of this limited set, because that's what decides how fat the tree that we build for the RPL is, and that's pretty much where we are.
L
This is for the particular example that I designed here, so you see I just put some of the sub-ToF and some ToF nodes, and I establish those links. What you can see now is the resulting spanning structure and, like we said earlier, we can build a number of them. And the operation of this spanning structure is, as Jeffrey said: you inject the packet anywhere, say M1 injects a packet.
L
Now we can do the exact same thing if we want to install a route, like (*,G), right. If this guy wants to say "I have a listener for (*,G)", we can send the advertisement "I have a listener for (*,G)"; M1 will advertise that and it will be flooded through the structure, and there can also be a listener here, which will be flooded through the structure. Now the nodes in the middle will know through which interfaces it got there.
L
L
We just have this flooding structure, and we can install, you know, filters, if you want to send a (*,G) only to some destinations, or we can use it as a broadcast. And for hashing, you know, I'm sure the message has passed: we built one tree, I can build 100 trees, right, as many as I like, and then we can decide which multicast flows go into which tree. The thinking, what we wanted to achieve, is to build those trees proactively, so we don't make them dependent on the (*,G)s.
L
We may make them dependent on the draft of flows, I mean, as you introduce your flows, but the trees don't dynamically try to build a structure like this each time you have a new group coming in. You just build a number of the structures, and then you decide, for a new (*,G) or a new flow: do you use one of the structures as broadcast, for mice flows, or whatever, or do you affect a particular (*,G) onto one of those particular pre-existing spanning structures? And that's pretty much where we are. I don't think we have another slide.
L
So, was it clear? Two approaches on the table. One is like sparse mode: everything goes up, from any leaf, to any ToF, and that ToF uses the reverse flooding-reduction path to reach the leaves; that's option one. The problem with option one: we have to have different routing between things coming from the south and things coming from the north. Option two: build a number of spanning structures, use the ToF-to-sub-ToF network as a non-broadcast...
L
Multi-access RPL, build something into it which is not too fat, not too lean, not too long, and that completes the whole set of sub-trees into a bigger spanning structure. These are kind of the two options we have on the table right now. Any questions? What did you say about no questions, Daniel? Yeah, when there's no question, it means what?
D
L
L
So you see the origin: I mean, some of those N nodes and sub-ToF nodes are parents, you know, of sub-trees going to leaves, and then you have to join them. Another question, which is an option here, is whether you join only when interested, that is, whether anyone has leaves down a given sub-tree; so that is an interesting point, and we can discuss it.
L
If all the listeners are on leaves, then, if you have somebody like M3 and N2, are you interested in those guys? Because they don't reach leaves. So the point was: oh, let's build the trees even for guys like M3, which don't have leaves behind them, because it might be that later on, because of a breakage or anything else...
L
Another node might decide to repair onto M3, and if that's the only time when we start to build the whole thing all the way north, then we will incur delay finishing the formation of the tree. So we decided to form the tree even from nodes like M3, which don't have leaves, and so probably will not have listeners, because maybe someday a node will reparent to them, and we want the tree to be ready to operate.
L
F
I
I
I
L
If the ToF wants to be included in the tree, it will operate just like the sub-ToF: it will select a parent and join too. So we could, if we wanted to; I have not represented it. Right now it's kind of built with the thought that the listeners are on the leaves, but it's easy for any node, for instance, if S1 is not part of the game here, to see that the trees are building and to join one, just like we did with S3.
L
I
L
I
Let's not get too cute, but I think, you know, the basic signaling is not a problem, so we don't need to signal in PGPs at all to build this kind of structure, so the (*,*) would just fall out for free, right. Yeah, it's only the (*,G) where we have to start to push PGPs and store state, if we want to prune, right, if...
L
I
L
I mean, maybe one question to the group to check is whether we were wise to kind of put aside, well, the sparse-mode way of doing things, which, like I said, takes everything to one ToF and then uses the reverse flooding-reduction path to reach the leaves. We did not remove that, I just put it on the side, because of the difference of routing north-to-south versus south-to-north. If there is any hint on whether we were right to do that, then it's a good time to tell us.
L
So we said that for mice flows we could actually do a (*,*), which would be like a broadcast and would send to every leaf through that broadcast topology, and then there would be no (*,G) or (S,G) or (S,G,rpt) state in the network. Then we said: if we have what we call elephant flows, and you can come back to Jeffrey's slides here, if you have very fat flows, we don't want to flood them through the whole network, and that's when we would like to install (*,G) or whatever state in the tree.
L
L
L
So you know on which side you've got listeners, and yes, you would need to install that, and then, when there is a packet, instead of flooding it through all the interfaces but the one you got it from, you would add, as an additional filter, "only if there is a (*,G) listener". So we have not yet, please, written text about exactly how that works, but you may figure that this structure is a logical topology, like a virtual topology overlaid over the whole fabric, and it's inside that topology that you flood.
F
Just to add that you could have simply a few (*,*) or (*,prefix) states in the network, and then, if you start sending traffic, the traffic will match one of those forwarding states, whether it's a (*,*) or a (*,prefix), or it could be a coexisting (*,G) state, and then forward it. Basically, it's like the longest match in the unicast case: you will match against the most specific existing state, yeah.
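(That "longest match over multicast states" can be sketched briefly in Python; the state encoding here is an assumption for illustration, with `"*"` standing for (*,*) and prefixes standing for (*,prefix) or (*,G-host) entries: the most specific matching entry wins, exactly like unicast longest-prefix match.)

```python
# Hypothetical sketch: pick the most specific multicast state matching a
# group address; "*" is the catch-all (*,*), a /32 is a (*,G-host) entry.
import ipaddress

def match_state(states, group):
    """states: list of prefix strings ('*' means (*,*)); return best match."""
    best, best_len = None, -1
    for s in states:
        if s == "*":
            hit, plen = True, 0  # (*,*) matches everything, least specific
        else:
            net = ipaddress.ip_network(s)
            hit, plen = ipaddress.ip_address(group) in net, net.prefixlen
        if hit and plen > best_len:
            best, best_len = s, plen
    return best
```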
D
F
D
F
L
F
L
In the initial approach, the one that goes north and then south, we basically could have just forwarded to G as if it was a unicast entry, or an anycast address, yes. So the packets would have gone up, and then they would have been flooded, because recognized as multicast, and flooded along the flooding-reduction path. So the signaling was pretty much the one we already have. If you want to have (S,G) in the picture, obviously you don't, so...